00:00:00.001 Started by upstream project "autotest-nightly" build number 4344
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3707
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.076 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.077 The recommended git tool is: git
00:00:00.077 using credential 00000000-0000-0000-0000-000000000002
00:00:00.079 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.119 Fetching changes from the remote Git repository
00:00:00.125 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.164 Using shallow fetch with depth 1
00:00:00.164 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.164 > git --version # timeout=10
00:00:00.217 > git --version # 'git version 2.39.2'
00:00:00.217 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.259 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.259 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.155 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.166 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.178 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.178 > git config core.sparsecheckout # timeout=10
00:00:06.189 > git read-tree -mu HEAD # timeout=10
00:00:06.204 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.230 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.230 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.395 [Pipeline] Start of Pipeline
00:00:06.408 [Pipeline] library
00:00:06.409 Loading library shm_lib@master
00:00:06.410 Library shm_lib@master is cached. Copying from home.
00:00:06.421 [Pipeline] node
00:00:06.434 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:06.436 [Pipeline] {
00:00:06.445 [Pipeline] catchError
00:00:06.446 [Pipeline] {
00:00:06.461 [Pipeline] wrap
00:00:06.470 [Pipeline] {
00:00:06.482 [Pipeline] stage
00:00:06.484 [Pipeline] { (Prologue)
00:00:06.500 [Pipeline] echo
00:00:06.501 Node: VM-host-SM38
00:00:06.506 [Pipeline] cleanWs
00:00:06.517 [WS-CLEANUP] Deleting project workspace...
00:00:06.517 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.524 [WS-CLEANUP] done
00:00:06.715 [Pipeline] setCustomBuildProperty
00:00:06.784 [Pipeline] httpRequest
00:00:07.265 [Pipeline] echo
00:00:07.266 Sorcerer 10.211.164.20 is alive
00:00:07.276 [Pipeline] retry
00:00:07.278 [Pipeline] {
00:00:07.292 [Pipeline] httpRequest
00:00:07.298 HttpMethod: GET
00:00:07.298 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.298 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.300 Response Code: HTTP/1.1 200 OK
00:00:07.300 Success: Status code 200 is in the accepted range: 200,404
00:00:07.301 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.397 [Pipeline] }
00:00:08.410 [Pipeline] // retry
00:00:08.417 [Pipeline] sh
00:00:08.700 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.715 [Pipeline] httpRequest
00:00:09.794 [Pipeline] echo
00:00:09.795 Sorcerer 10.211.164.20 is alive
00:00:09.805 [Pipeline] retry
00:00:09.807 [Pipeline] {
00:00:09.821 [Pipeline] httpRequest
00:00:09.825 HttpMethod: GET
00:00:09.826 URL: http://10.211.164.20/packages/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz
00:00:09.827 Sending request to url: http://10.211.164.20/packages/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz
00:00:09.848 Response Code: HTTP/1.1 200 OK
00:00:09.849 Success: Status code 200 is in the accepted range: 200,404
00:00:09.849 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz
00:01:04.231 [Pipeline] }
00:01:04.247 [Pipeline] // retry
00:01:04.254 [Pipeline] sh
00:01:04.537 + tar --no-same-owner -xf spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz
00:01:07.863 [Pipeline] sh
00:01:08.149 + git -C spdk log --oneline -n5
00:01:08.149 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:01:08.149 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions
00:01:08.149 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove
00:01:08.149 0ea9ac02f accel/mlx5: Create pool of UMRs
00:01:08.149 60adca7e1 lib/mlx5: API to configure UMR
00:01:08.172 [Pipeline] writeFile
00:01:08.189 [Pipeline] sh
00:01:08.478 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:08.492 [Pipeline] sh
00:01:08.778 + cat autorun-spdk.conf
00:01:08.778 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:08.778 SPDK_TEST_NVME=1
00:01:08.778 SPDK_TEST_FTL=1
00:01:08.778 SPDK_TEST_ISAL=1
00:01:08.778 SPDK_RUN_ASAN=1
00:01:08.778 SPDK_RUN_UBSAN=1
00:01:08.778 SPDK_TEST_XNVME=1
00:01:08.778 SPDK_TEST_NVME_FDP=1
00:01:08.778 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:08.788 RUN_NIGHTLY=1
00:01:08.790 [Pipeline] }
00:01:08.805 [Pipeline] // stage
00:01:08.822 [Pipeline] stage
00:01:08.824 [Pipeline] { (Run VM)
00:01:08.837 [Pipeline] sh
00:01:09.124 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:09.124 + echo 'Start stage prepare_nvme.sh'
00:01:09.124 Start stage prepare_nvme.sh
00:01:09.124 + [[ -n 5 ]]
00:01:09.124 + disk_prefix=ex5
00:01:09.124 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:09.124 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:09.124 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:09.124 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:09.124 ++ SPDK_TEST_NVME=1
00:01:09.124 ++ SPDK_TEST_FTL=1
00:01:09.124 ++ SPDK_TEST_ISAL=1
00:01:09.124 ++ SPDK_RUN_ASAN=1
00:01:09.124 ++ SPDK_RUN_UBSAN=1
00:01:09.124 ++ SPDK_TEST_XNVME=1
00:01:09.124 ++ SPDK_TEST_NVME_FDP=1
00:01:09.124 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:09.124 ++ RUN_NIGHTLY=1
00:01:09.124 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:09.124 + nvme_files=()
00:01:09.124 + declare -A nvme_files
00:01:09.124 + backend_dir=/var/lib/libvirt/images/backends
00:01:09.124 + nvme_files['nvme.img']=5G
00:01:09.124 + nvme_files['nvme-cmb.img']=5G
00:01:09.124 + nvme_files['nvme-multi0.img']=4G
00:01:09.124 + nvme_files['nvme-multi1.img']=4G
00:01:09.124 + nvme_files['nvme-multi2.img']=4G
00:01:09.124 + nvme_files['nvme-openstack.img']=8G
00:01:09.124 + nvme_files['nvme-zns.img']=5G
00:01:09.124 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:09.124 + (( SPDK_TEST_FTL == 1 ))
00:01:09.124 + nvme_files["nvme-ftl.img"]=6G
00:01:09.124 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:09.124 + nvme_files["nvme-fdp.img"]=1G
00:01:09.124 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:09.124 + for nvme in "${!nvme_files[@]}"
00:01:09.124 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:01:09.386 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:09.386 + for nvme in "${!nvme_files[@]}"
00:01:09.386 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G
00:01:10.375 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:10.375 + for nvme in "${!nvme_files[@]}"
00:01:10.375 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:01:10.375 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:10.375 + for nvme in "${!nvme_files[@]}"
00:01:10.375 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:01:10.375 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:10.375 + for nvme in "${!nvme_files[@]}"
00:01:10.375 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:01:10.375 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:10.375 + for nvme in "${!nvme_files[@]}"
00:01:10.375 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:01:10.637 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:10.637 + for nvme in "${!nvme_files[@]}"
00:01:10.637 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:01:10.899 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:10.899 + for nvme in "${!nvme_files[@]}"
00:01:10.899 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G
00:01:11.160 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:11.160 + for nvme in "${!nvme_files[@]}"
00:01:11.160 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:01:11.734 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:11.734 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:01:11.734 + echo 'End stage prepare_nvme.sh'
00:01:11.734 End stage prepare_nvme.sh
00:01:12.006 [Pipeline] sh
00:01:12.288 + DISTRO=fedora39
00:01:12.288 + CPUS=10
00:01:12.288 + RAM=12288
00:01:12.288 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:12.288 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:12.288
00:01:12.288 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:12.288 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:12.288 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:12.288 HELP=0
00:01:12.288 DRY_RUN=0
00:01:12.288 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,
00:01:12.288 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:12.288 NVME_AUTO_CREATE=0
00:01:12.288 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,,
00:01:12.288 NVME_CMB=,,,,
00:01:12.288 NVME_PMR=,,,,
00:01:12.288 NVME_ZNS=,,,,
00:01:12.288 NVME_MS=true,,,,
00:01:12.288 NVME_FDP=,,,on,
00:01:12.288 SPDK_VAGRANT_DISTRO=fedora39
00:01:12.288 SPDK_VAGRANT_VMCPU=10
00:01:12.288 SPDK_VAGRANT_VMRAM=12288
00:01:12.288 SPDK_VAGRANT_PROVIDER=libvirt
00:01:12.288 SPDK_VAGRANT_HTTP_PROXY=
00:01:12.288 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:12.288 SPDK_OPENSTACK_NETWORK=0
00:01:12.288 VAGRANT_PACKAGE_BOX=0
00:01:12.288 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:12.288 FORCE_DISTRO=true
00:01:12.288 VAGRANT_BOX_VERSION=
00:01:12.288 EXTRA_VAGRANTFILES=
00:01:12.288 NIC_MODEL=e1000
00:01:12.288
00:01:12.288 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:12.288 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:14.839 Bringing machine 'default' up with 'libvirt' provider...
00:01:15.101 ==> default: Creating image (snapshot of base box volume).
00:01:15.362 ==> default: Creating domain with the following settings...
00:01:15.362 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733591868_d9e886844b8be3453ba7
00:01:15.362 ==> default: -- Domain type: kvm
00:01:15.362 ==> default: -- Cpus: 10
00:01:15.362 ==> default: -- Feature: acpi
00:01:15.362 ==> default: -- Feature: apic
00:01:15.362 ==> default: -- Feature: pae
00:01:15.362 ==> default: -- Memory: 12288M
00:01:15.362 ==> default: -- Memory Backing: hugepages:
00:01:15.362 ==> default: -- Management MAC:
00:01:15.362 ==> default: -- Loader:
00:01:15.362 ==> default: -- Nvram:
00:01:15.362 ==> default: -- Base box: spdk/fedora39
00:01:15.362 ==> default: -- Storage pool: default
00:01:15.362 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733591868_d9e886844b8be3453ba7.img (20G)
00:01:15.362 ==> default: -- Volume Cache: default
00:01:15.362 ==> default: -- Kernel:
00:01:15.362 ==> default: -- Initrd:
00:01:15.362 ==> default: -- Graphics Type: vnc
00:01:15.362 ==> default: -- Graphics Port: -1
00:01:15.362 ==> default: -- Graphics IP: 127.0.0.1
00:01:15.362 ==> default: -- Graphics Password: Not defined
00:01:15.363 ==> default: -- Video Type: cirrus
00:01:15.363 ==> default: -- Video VRAM: 9216
00:01:15.363 ==> default: -- Sound Type:
00:01:15.363 ==> default: -- Keymap: en-us
00:01:15.363 ==> default: -- TPM Path:
00:01:15.363 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:15.363 ==> default: -- Command line args:
00:01:15.363 ==> default: -> value=-device,
00:01:15.363 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:15.363 ==> default: -> value=-drive,
00:01:15.363 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:15.363 ==> default: -> value=-device,
00:01:15.363 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:15.363 ==> default: -> value=-device,
00:01:15.363 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:15.363 ==> default: -> value=-drive,
00:01:15.363 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0,
00:01:15.363 ==> default: -> value=-device,
00:01:15.363 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:15.363 ==> default: -> value=-device,
00:01:15.363 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:15.363 ==> default: -> value=-drive,
00:01:15.363 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:15.363 ==> default: -> value=-device,
00:01:15.363 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:15.363 ==> default: -> value=-drive,
00:01:15.363 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:15.363 ==> default: -> value=-device,
00:01:15.363 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:15.363 ==> default: -> value=-drive,
00:01:15.363 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:15.363 ==> default: -> value=-device,
00:01:15.363 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:15.363 ==> default: -> value=-device,
00:01:15.363 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:15.363 ==> default: -> value=-device,
00:01:15.363 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:15.363 ==> default: -> value=-drive,
00:01:15.363 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:15.363 ==> default: -> value=-device,
00:01:15.363 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:15.363 ==> default: Creating shared folders metadata...
00:01:15.363 ==> default: Starting domain.
00:01:16.745 ==> default: Waiting for domain to get an IP address...
00:01:34.869 ==> default: Waiting for SSH to become available...
00:01:34.870 ==> default: Configuring and enabling network interfaces...
00:01:38.175 default: SSH address: 192.168.121.169:22
00:01:38.175 default: SSH username: vagrant
00:01:38.175 default: SSH auth method: private key
00:01:40.091 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:48.234 ==> default: Mounting SSHFS shared folder...
00:01:49.621 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:49.621 ==> default: Checking Mount..
00:01:50.595 ==> default: Folder Successfully Mounted!
00:01:50.595
00:01:50.595 SUCCESS!
00:01:50.595
00:01:50.595 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:50.595 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:50.595 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:50.595
00:01:50.605 [Pipeline] }
00:01:50.620 [Pipeline] // stage
00:01:50.629 [Pipeline] dir
00:01:50.630 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:01:50.632 [Pipeline] {
00:01:50.644 [Pipeline] catchError
00:01:50.646 [Pipeline] {
00:01:50.659 [Pipeline] sh
00:01:50.945 + vagrant ssh-config --host vagrant
00:01:50.945 + sed -ne '/^Host/,$p'
00:01:50.945 + tee ssh_conf
00:01:53.495 Host vagrant
00:01:53.495 HostName 192.168.121.169
00:01:53.495 User vagrant
00:01:53.495 Port 22
00:01:53.495 UserKnownHostsFile /dev/null
00:01:53.495 StrictHostKeyChecking no
00:01:53.495 PasswordAuthentication no
00:01:53.495 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:53.495 IdentitiesOnly yes
00:01:53.495 LogLevel FATAL
00:01:53.495 ForwardAgent yes
00:01:53.495 ForwardX11 yes
00:01:53.495
00:01:53.511 [Pipeline] withEnv
00:01:53.514 [Pipeline] {
00:01:53.528 [Pipeline] sh
00:01:53.812 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:01:53.812 source /etc/os-release
00:01:53.812 [[ -e /image.version ]] && img=$(< /image.version)
00:01:53.812 # Minimal, systemd-like check.
00:01:53.812 if [[ -e /.dockerenv ]]; then
00:01:53.812 # Clear garbage from the node'\''s name:
00:01:53.812 # agt-er_autotest_547-896 -> autotest_547-896
00:01:53.812 # $HOSTNAME is the actual container id
00:01:53.812 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:53.812 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:53.812 # We can assume this is a mount from a host where container is running,
00:01:53.812 # so fetch its hostname to easily identify the target swarm worker.
00:01:53.812 container="$(< /etc/hostname) ($agent)"
00:01:53.812 else
00:01:53.812 # Fallback
00:01:53.812 container=$agent
00:01:53.812 fi
00:01:53.812 fi
00:01:53.812 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:53.812 '
00:01:54.086 [Pipeline] }
00:01:54.103 [Pipeline] // withEnv
00:01:54.112 [Pipeline] setCustomBuildProperty
00:01:54.127 [Pipeline] stage
00:01:54.130 [Pipeline] { (Tests)
00:01:54.148 [Pipeline] sh
00:01:54.433 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:54.710 [Pipeline] sh
00:01:54.995 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:55.273 [Pipeline] timeout
00:01:55.274 Timeout set to expire in 50 min
00:01:55.275 [Pipeline] {
00:01:55.288 [Pipeline] sh
00:01:55.572 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:01:56.146 HEAD is now at a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:01:56.159 [Pipeline] sh
00:01:56.441 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:01:56.717 [Pipeline] sh
00:01:57.073 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:57.350 [Pipeline] sh
00:01:57.636 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:01:57.900 ++ readlink -f spdk_repo
00:01:57.900 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:57.900 + [[ -n /home/vagrant/spdk_repo ]]
00:01:57.900 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:57.900 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:57.900 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:57.900 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:57.900 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:57.900 + [[ nvme-vg-autotest == pkgdep-* ]]
00:01:57.900 + cd /home/vagrant/spdk_repo
00:01:57.900 + source /etc/os-release
00:01:57.900 ++ NAME='Fedora Linux'
00:01:57.900 ++ VERSION='39 (Cloud Edition)'
00:01:57.900 ++ ID=fedora
00:01:57.900 ++ VERSION_ID=39
00:01:57.900 ++ VERSION_CODENAME=
00:01:57.900 ++ PLATFORM_ID=platform:f39
00:01:57.900 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:57.900 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:57.900 ++ LOGO=fedora-logo-icon
00:01:57.900 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:57.900 ++ HOME_URL=https://fedoraproject.org/
00:01:57.900 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:57.900 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:57.900 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:57.900 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:57.900 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:57.900 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:57.900 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:57.900 ++ SUPPORT_END=2024-11-12
00:01:57.900 ++ VARIANT='Cloud Edition'
00:01:57.900 ++ VARIANT_ID=cloud
00:01:57.900 + uname -a
00:01:57.900 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:57.900 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:58.186 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:58.447 Hugepages
00:01:58.447 node hugesize free / total
00:01:58.447 node0 1048576kB 0 / 0
00:01:58.447 node0 2048kB 0 / 0
00:01:58.447
00:01:58.447 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:58.447 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:58.447 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:58.447 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:01:58.708 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:01:58.708 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:01:58.708 + rm -f /tmp/spdk-ld-path
00:01:58.708 + source autorun-spdk.conf
00:01:58.708 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:58.708 ++ SPDK_TEST_NVME=1
00:01:58.708 ++ SPDK_TEST_FTL=1
00:01:58.708 ++ SPDK_TEST_ISAL=1
00:01:58.708 ++ SPDK_RUN_ASAN=1
00:01:58.708 ++ SPDK_RUN_UBSAN=1
00:01:58.708 ++ SPDK_TEST_XNVME=1
00:01:58.708 ++ SPDK_TEST_NVME_FDP=1
00:01:58.708 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:58.708 ++ RUN_NIGHTLY=1
00:01:58.708 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:58.708 + [[ -n '' ]]
00:01:58.708 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:58.708 + for M in /var/spdk/build-*-manifest.txt
00:01:58.708 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:58.708 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:58.708 + for M in /var/spdk/build-*-manifest.txt
00:01:58.708 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:58.708 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:58.708 + for M in /var/spdk/build-*-manifest.txt
00:01:58.708 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:58.708 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:58.708 ++ uname
00:01:58.708 + [[ Linux == \L\i\n\u\x ]]
00:01:58.708 + sudo dmesg -T
00:01:58.708 + sudo dmesg --clear
00:01:58.708 + dmesg_pid=5034
+ [[ Fedora Linux == FreeBSD ]]
00:01:58.708 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:58.708 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:58.708 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:58.708 + [[ -x /usr/src/fio-static/fio ]]
00:01:58.708 + sudo dmesg -Tw
00:01:58.708 + export FIO_BIN=/usr/src/fio-static/fio
00:01:58.708 + FIO_BIN=/usr/src/fio-static/fio
00:01:58.708 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:58.708 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:58.708 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:58.708 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:58.708 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:58.708 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:58.708 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:58.708 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:58.708 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:58.708 17:18:32 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:58.708 17:18:32 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:58.708 17:18:32 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:58.708 17:18:32 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:01:58.708 17:18:32 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:01:58.708 17:18:32 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:01:58.708 17:18:32 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:01:58.708 17:18:32 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:01:58.708 17:18:32 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:01:58.708 17:18:32 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:01:58.708 17:18:32 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:58.708 17:18:32 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1
00:01:58.708 17:18:32 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:58.708 17:18:32 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:58.968 17:18:32 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:58.968 17:18:32 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:58.968 17:18:32 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:58.968 17:18:32 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:58.968 17:18:32 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:58.968 17:18:32 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:58.968 17:18:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:58.968 17:18:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:58.968 17:18:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:58.968 17:18:32 -- paths/export.sh@5 -- $ export PATH
00:01:58.968 17:18:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:58.968 17:18:32 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:58.968 17:18:32 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:58.968 17:18:32 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733591912.XXXXXX
00:01:58.968 17:18:32 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733591912.7u7EaC
00:01:58.968 17:18:32 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:58.968 17:18:32 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:58.968 17:18:32 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:58.968 17:18:32 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:58.968 17:18:32 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:58.968 17:18:32 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:58.968 17:18:32 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:58.968 17:18:32 -- common/autotest_common.sh@10 -- $ set +x
00:01:58.968 17:18:32 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:01:58.968 17:18:32 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:58.969 17:18:32 -- pm/common@17 -- $ local monitor
00:01:58.969 17:18:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:58.969 17:18:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:58.969 17:18:32 -- pm/common@25 -- $ sleep 1
00:01:58.969 17:18:32 -- pm/common@21 -- $ date +%s
00:01:58.969 17:18:32 -- pm/common@21 -- $ date +%s
00:01:58.969 17:18:32 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733591912
00:01:58.969 17:18:32 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733591912
00:01:58.969 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733591912_collect-cpu-load.pm.log
00:01:58.969 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733591912_collect-vmstat.pm.log
00:01:59.910 17:18:33 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:59.910 17:18:33 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:59.910 17:18:33 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:59.910 17:18:33 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:59.910 17:18:33 -- spdk/autobuild.sh@16 -- $ date -u
00:01:59.910 Sat Dec 7 05:18:33 PM UTC 2024
00:01:59.910 17:18:33 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:59.910 v25.01-pre-311-ga2f5e1c2d
00:01:59.910 17:18:33 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:59.910 17:18:33 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:59.910 17:18:33 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:59.910 17:18:33 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:59.910 17:18:33 -- common/autotest_common.sh@10 -- $ set +x
00:01:59.910 ************************************
00:01:59.910 START TEST asan
00:01:59.910 ************************************
00:01:59.910 using asan
00:01:59.910 17:18:33 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:59.910
00:01:59.910 real 0m0.000s
00:01:59.910 user 0m0.000s
00:01:59.910 sys 0m0.000s
00:01:59.910 17:18:33 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:59.910 17:18:33 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:59.910 ************************************
00:01:59.910 END TEST asan
00:01:59.910 ************************************
00:01:59.910 17:18:33 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:59.910 17:18:33 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:59.910 17:18:33 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:59.910 17:18:33 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:59.910 17:18:33 -- common/autotest_common.sh@10 -- $ set +x
00:01:59.910 ************************************
00:01:59.910 START TEST ubsan
00:01:59.910 ************************************
00:01:59.910 using ubsan
00:01:59.910 17:18:33 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:59.910
00:01:59.910 real 0m0.000s
00:01:59.910 user 0m0.000s
00:01:59.910 sys 0m0.000s
00:01:59.910 17:18:33 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:59.910 ************************************
00:01:59.910 END TEST ubsan
00:01:59.910 ************************************
00:01:59.910 17:18:33 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:00.171 17:18:33 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:00.171 17:18:33 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:00.171 17:18:33 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:00.171 17:18:33 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:00.171 17:18:33 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:00.171 17:18:33 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:00.171 17:18:33 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:00.171 17:18:33 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:00.171 17:18:33 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:00.171 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:00.171 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:00.744 Using 'verbs' RDMA provider
00:02:13.919 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:23.913 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:23.913 Creating mk/config.mk...done.
00:02:23.913 Creating mk/cc.flags.mk...done.
00:02:23.913 Type 'make' to build.
00:02:23.913 17:18:56 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:23.913 17:18:56 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:23.913 17:18:56 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:23.913 17:18:56 -- common/autotest_common.sh@10 -- $ set +x
00:02:23.913 ************************************
00:02:23.913 START TEST make
00:02:23.913 ************************************
00:02:23.913 17:18:56 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:23.913 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:23.913 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:23.913 meson setup builddir \
00:02:23.913 -Dwith-libaio=enabled \
00:02:23.913 -Dwith-liburing=enabled \
00:02:23.913 -Dwith-libvfn=disabled \
00:02:23.913 -Dwith-spdk=disabled \
00:02:23.913 -Dexamples=false \
00:02:23.913 -Dtests=false \
00:02:23.913 -Dtools=false && \
00:02:23.913 meson compile -C builddir && \
00:02:23.913 cd -)
00:02:23.913 make[1]: Nothing to be done for 'all'.
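The xnvme configure/build step echoed just above can also be reproduced by hand with the same meson options; a minimal sketch, assuming the /home/vagrant/spdk_repo checkout layout used in this run:

#!/usr/bin/env bash
# Minimal sketch: re-run the xnvme meson configure/build with the same
# feature flags autotest echoed above. The checkout path is an assumption
# taken from this run's layout, not a fixed SPDK default.
set -euo pipefail
cd /home/vagrant/spdk_repo/spdk/xnvme
# Same pkg-config search path the build above exported:
export PKG_CONFIG_PATH="${PKG_CONFIG_PATH:-}:/usr/lib/pkgconfig:/usr/lib64/pkgconfig"
meson setup builddir \
    -Dwith-libaio=enabled \
    -Dwith-liburing=enabled \
    -Dwith-libvfn=disabled \
    -Dwith-spdk=disabled \
    -Dexamples=false \
    -Dtests=false \
    -Dtools=false
meson compile -C builddir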
00:02:25.287 The Meson build system
00:02:25.287 Version: 1.5.0
00:02:25.287 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:25.287 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:25.287 Build type: native build
00:02:25.287 Project name: xnvme
00:02:25.287 Project version: 0.7.5
00:02:25.287 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:25.287 C linker for the host machine: cc ld.bfd 2.40-14
00:02:25.287 Host machine cpu family: x86_64
00:02:25.287 Host machine cpu: x86_64
00:02:25.287 Message: host_machine.system: linux
00:02:25.287 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:25.287 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:25.287 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:25.287 Run-time dependency threads found: YES
00:02:25.287 Has header "setupapi.h" : NO
00:02:25.287 Has header "linux/blkzoned.h" : YES
00:02:25.287 Has header "linux/blkzoned.h" : YES (cached)
00:02:25.287 Has header "libaio.h" : YES
00:02:25.287 Library aio found: YES
00:02:25.287 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:25.287 Run-time dependency liburing found: YES 2.2
00:02:25.287 Dependency libvfn skipped: feature with-libvfn disabled
00:02:25.287 Found CMake: /usr/bin/cmake (3.27.7)
00:02:25.287 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:02:25.287 Subproject spdk : skipped: feature with-spdk disabled
00:02:25.287 Run-time dependency appleframeworks found: NO (tried framework)
00:02:25.287 Run-time dependency appleframeworks found: NO (tried framework)
00:02:25.287 Library rt found: YES
00:02:25.287 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:25.287 Configuring xnvme_config.h using configuration
00:02:25.287 Configuring xnvme.spec using configuration
00:02:25.287 Run-time dependency bash-completion found: YES 2.11
00:02:25.287 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:25.287 Program cp found: YES (/usr/bin/cp)
00:02:25.287 Build targets in project: 3
00:02:25.287
00:02:25.287 xnvme 0.7.5
00:02:25.287
00:02:25.287 Subprojects
00:02:25.287 spdk : NO Feature 'with-spdk' disabled
00:02:25.287
00:02:25.287 User defined options
00:02:25.287 examples : false
00:02:25.287 tests : false
00:02:25.287 tools : false
00:02:25.287 with-libaio : enabled
00:02:25.287 with-liburing: enabled
00:02:25.287 with-libvfn : disabled
00:02:25.287 with-spdk : disabled
00:02:25.287
00:02:25.287 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:25.855 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:25.855 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:25.855 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:25.855 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:25.855 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:25.855 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:25.855 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:25.855 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:25.855 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:25.855 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:25.855 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:25.855 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:25.855 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:25.855 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:25.855 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:25.855 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:25.855 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:25.855 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:25.855 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:25.855 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:25.855 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:25.855 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:26.113 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:26.113 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:26.113 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:26.113 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:26.113 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:26.113 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:26.113 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:26.113 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:26.113 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:26.113 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:26.113 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:26.113 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:26.113 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:26.113 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:26.113 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:26.113 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:26.113 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:26.113 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:26.113 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:26.113 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:26.113 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:26.113 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:26.113 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:26.113 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:26.113 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:26.113 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:26.113 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:26.113 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:26.113 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:26.113 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:26.113 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:26.113 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:26.113 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:26.113 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:26.371 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:26.371 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:26.371 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:26.371 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:26.371 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:26.371 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:26.371 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:26.371 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:26.371 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:26.371 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:26.371 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:26.371 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:26.371 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:26.371 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:26.371 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:26.371 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:26.371 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:26.371 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:26.629 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:26.629 [75/76] Linking static target lib/libxnvme.a
00:02:26.629 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:26.629 INFO: autodetecting backend as ninja
00:02:26.629 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:26.887 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:33.442 The Meson build system
00:02:33.442 Version: 1.5.0
00:02:33.442 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:33.442 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:33.442 Build type: native build
00:02:33.442 Program cat found: YES (/usr/bin/cat)
00:02:33.442 Project name: DPDK
00:02:33.442 Project version: 24.03.0
00:02:33.442 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:33.442 C linker for the host machine: cc ld.bfd 2.40-14
00:02:33.442 Host machine cpu family: x86_64
00:02:33.442 Host machine cpu: x86_64
00:02:33.442 Message: ## Building in Developer Mode ##
00:02:33.442 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:33.442 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:33.442 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:33.442 Program python3 found: YES (/usr/bin/python3)
00:02:33.442 Program cat found: YES (/usr/bin/cat)
00:02:33.442 Compiler for C supports arguments -march=native: YES
00:02:33.442 Checking for size of "void *" : 8
00:02:33.442 Checking for size of "void *" : 8 (cached)
00:02:33.442 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:33.442 Library m found: YES
00:02:33.442 Library numa found: YES
00:02:33.442 Has header "numaif.h" : YES
00:02:33.442 Library fdt found: NO
00:02:33.442 Library execinfo found: NO
00:02:33.442 Has header "execinfo.h" : YES
00:02:33.442 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:33.442 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:33.442 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:33.442 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:33.442 Run-time dependency openssl found: YES 3.1.1
00:02:33.442 Run-time dependency libpcap found: YES 1.10.4
00:02:33.442 Has header "pcap.h" with dependency libpcap: YES
00:02:33.442 Compiler for C supports arguments -Wcast-qual: YES
00:02:33.442 Compiler for C supports arguments -Wdeprecated: YES
00:02:33.442 Compiler for C supports arguments -Wformat: YES
00:02:33.442 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:33.442 Compiler for C supports arguments -Wformat-security: NO
00:02:33.442 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:33.442 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:33.442 Compiler for C supports arguments -Wnested-externs: YES
00:02:33.442 Compiler for C supports arguments -Wold-style-definition: YES
00:02:33.442 Compiler for C supports arguments -Wpointer-arith: YES
00:02:33.442 Compiler for C supports arguments -Wsign-compare: YES
00:02:33.442 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:33.442 Compiler for C supports arguments -Wundef: YES
00:02:33.442 Compiler for C supports arguments -Wwrite-strings: YES
00:02:33.442 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:33.442 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:33.442 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:33.442 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:33.442 Program objdump found: YES (/usr/bin/objdump)
00:02:33.442 Compiler for C supports arguments -mavx512f: YES
00:02:33.442 Checking if "AVX512 checking" compiles: YES
00:02:33.442 Fetching value of define "__SSE4_2__" : 1
00:02:33.442 Fetching value of define "__AES__" : 1
00:02:33.442 Fetching value of define "__AVX__" : 1
00:02:33.442 Fetching value of define "__AVX2__" : 1
00:02:33.442 Fetching value of define "__AVX512BW__" : 1
00:02:33.442 Fetching value of define "__AVX512CD__" : 1
00:02:33.442 Fetching value of define "__AVX512DQ__" : 1
00:02:33.442 Fetching value of define "__AVX512F__" : 1
00:02:33.442 Fetching value of define "__AVX512VL__" : 1
00:02:33.443 Fetching value of define "__PCLMUL__" : 1
00:02:33.443 Fetching value of define "__RDRND__" : 1
00:02:33.443 Fetching value of define "__RDSEED__" : 1
00:02:33.443 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:33.443 Fetching value of define "__znver1__" : (undefined)
00:02:33.443 Fetching value of define "__znver2__" : (undefined)
00:02:33.443 Fetching value of define "__znver3__" : (undefined)
00:02:33.443 Fetching value of define "__znver4__" : (undefined)
00:02:33.443 Library asan found: YES
00:02:33.443 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:33.443 Message: lib/log: Defining dependency "log"
00:02:33.443 Message: lib/kvargs: Defining dependency "kvargs"
00:02:33.443 Message: lib/telemetry: Defining dependency "telemetry"
00:02:33.443 Library rt found: YES
00:02:33.443 Checking for function "getentropy" : NO
00:02:33.443 Message: lib/eal: Defining dependency "eal"
00:02:33.443 Message: lib/ring: Defining dependency "ring"
00:02:33.443 Message: lib/rcu: Defining dependency "rcu"
00:02:33.443 Message: lib/mempool: Defining dependency "mempool"
00:02:33.443 Message: lib/mbuf: Defining dependency "mbuf"
00:02:33.443 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:33.443 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:33.443 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:33.443 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:33.443 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:33.443 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:33.443 Compiler for C supports arguments -mpclmul: YES
00:02:33.443 Compiler for C supports arguments -maes: YES
00:02:33.443 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:33.443 Compiler for C supports arguments -mavx512bw: YES
00:02:33.443 Compiler for C supports arguments -mavx512dq: YES
00:02:33.443 Compiler for C supports arguments -mavx512vl: YES
00:02:33.443 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:33.443 Compiler for C supports arguments -mavx2: YES
00:02:33.443 Compiler for C supports arguments -mavx: YES
00:02:33.443 Message: lib/net: Defining dependency "net"
00:02:33.443 Message: lib/meter: Defining dependency "meter"
00:02:33.443 Message: lib/ethdev: Defining dependency "ethdev"
00:02:33.443 Message: lib/pci: Defining dependency "pci"
00:02:33.443 Message: lib/cmdline: Defining dependency "cmdline"
00:02:33.443 Message: lib/hash: Defining dependency "hash"
00:02:33.443 Message: lib/timer: Defining dependency "timer"
00:02:33.443 Message: lib/compressdev: Defining dependency "compressdev"
00:02:33.443 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:33.443 Message: lib/dmadev: Defining dependency "dmadev"
00:02:33.443 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:33.443 Message: lib/power: Defining dependency "power"
00:02:33.443 Message: lib/reorder: Defining dependency "reorder"
00:02:33.443 Message: lib/security: Defining dependency "security"
00:02:33.443 Has header "linux/userfaultfd.h" : YES
00:02:33.443 Has header "linux/vduse.h" : YES
00:02:33.443 Message: lib/vhost: Defining dependency "vhost"
00:02:33.443 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:33.443 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:33.443 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:33.443 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:33.443 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:33.443 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:33.443 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:33.443 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:33.443 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:33.443 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:33.443 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:33.443 Configuring doxy-api-html.conf using configuration
00:02:33.443 Configuring doxy-api-man.conf using configuration
00:02:33.443 Program mandb found: YES (/usr/bin/mandb)
00:02:33.443 Program sphinx-build found: NO
00:02:33.443 Configuring rte_build_config.h using configuration
00:02:33.443 Message:
00:02:33.443 =================
00:02:33.443 Applications Enabled
00:02:33.443 =================
00:02:33.443
00:02:33.443 apps:
00:02:33.443
00:02:33.443
00:02:33.443 Message:
00:02:33.443 =================
00:02:33.443 Libraries Enabled
00:02:33.443 =================
00:02:33.443
00:02:33.443 libs:
00:02:33.443 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:33.443 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:33.443 cryptodev, dmadev, power, reorder, security, vhost,
00:02:33.443
00:02:33.443 Message:
00:02:33.443 ===============
00:02:33.443 Drivers Enabled
00:02:33.443 ===============
00:02:33.443
00:02:33.443 common:
00:02:33.443
00:02:33.443 bus:
00:02:33.443 pci, vdev,
00:02:33.443 mempool:
00:02:33.443 ring,
00:02:33.443 dma:
00:02:33.443
00:02:33.443 net:
00:02:33.443
00:02:33.443 crypto:
00:02:33.443
00:02:33.443 compress:
00:02:33.443
00:02:33.443 vdpa:
00:02:33.443
00:02:33.443
00:02:33.443 Message:
00:02:33.443 =================
00:02:33.443 Content Skipped
00:02:33.443 =================
00:02:33.443
00:02:33.443 apps:
00:02:33.443 dumpcap: explicitly disabled via build config
00:02:33.443 graph: explicitly disabled via build config
00:02:33.443 pdump: explicitly disabled via build config
00:02:33.443 proc-info: explicitly disabled via build config
00:02:33.443 test-acl: explicitly disabled via build config
00:02:33.443 test-bbdev: explicitly disabled via build config
00:02:33.443 test-cmdline: explicitly disabled via build config
00:02:33.443 test-compress-perf: explicitly disabled via build config
00:02:33.443 test-crypto-perf: explicitly disabled via build config
00:02:33.443 test-dma-perf: explicitly disabled via build config
00:02:33.443 test-eventdev: explicitly disabled via build config
00:02:33.443 test-fib: explicitly disabled via build config
00:02:33.443 test-flow-perf: explicitly disabled via build config
00:02:33.443 test-gpudev: explicitly disabled via build config
00:02:33.443 test-mldev: explicitly disabled via build config
00:02:33.443 test-pipeline: explicitly disabled via build config
00:02:33.443 test-pmd: explicitly disabled via build config
00:02:33.443 test-regex: explicitly disabled via build config
00:02:33.443 test-sad: explicitly disabled via build config
00:02:33.443 test-security-perf: explicitly disabled via build config
00:02:33.443
00:02:33.443 libs:
00:02:33.443 argparse: explicitly disabled via build config
00:02:33.443 metrics: explicitly disabled via build config
00:02:33.443 acl: explicitly disabled via build config
00:02:33.443 bbdev: explicitly disabled via build config
00:02:33.443 bitratestats: explicitly disabled via build config
00:02:33.443 bpf: explicitly disabled via build config
00:02:33.443 cfgfile: explicitly disabled via build config
00:02:33.443 distributor: explicitly disabled via build config
00:02:33.443 efd: explicitly disabled via build config
00:02:33.443 eventdev: explicitly disabled via build config
00:02:33.443 dispatcher: explicitly disabled via build config
00:02:33.443 gpudev: explicitly disabled via build config
00:02:33.443 gro: explicitly disabled via build config
00:02:33.443 gso: explicitly disabled via build config
00:02:33.443 ip_frag: explicitly disabled via build config
00:02:33.443 jobstats: explicitly disabled via build config
00:02:33.443 latencystats: explicitly disabled via build config
00:02:33.443 lpm: explicitly disabled via build config
00:02:33.443 member: explicitly disabled via build config
00:02:33.443 pcapng: explicitly disabled via build config
00:02:33.443 rawdev: explicitly disabled via build config
regexdev: explicitly disabled via build config 00:02:33.443 mldev: explicitly disabled via build config 00:02:33.443 rib: explicitly disabled via build config 00:02:33.443 sched: explicitly disabled via build config 00:02:33.443 stack: explicitly disabled via build config 00:02:33.443 ipsec: explicitly disabled via build config 00:02:33.443 pdcp: explicitly disabled via build config 00:02:33.443 fib: explicitly disabled via build config 00:02:33.443 port: explicitly disabled via build config 00:02:33.443 pdump: explicitly disabled via build config 00:02:33.443 table: explicitly disabled via build config 00:02:33.443 pipeline: explicitly disabled via build config 00:02:33.443 graph: explicitly disabled via build config 00:02:33.443 node: explicitly disabled via build config 00:02:33.443 00:02:33.443 drivers: 00:02:33.443 common/cpt: not in enabled drivers build config 00:02:33.443 common/dpaax: not in enabled drivers build config 00:02:33.443 common/iavf: not in enabled drivers build config 00:02:33.443 common/idpf: not in enabled drivers build config 00:02:33.443 common/ionic: not in enabled drivers build config 00:02:33.443 common/mvep: not in enabled drivers build config 00:02:33.443 common/octeontx: not in enabled drivers build config 00:02:33.443 bus/auxiliary: not in enabled drivers build config 00:02:33.443 bus/cdx: not in enabled drivers build config 00:02:33.443 bus/dpaa: not in enabled drivers build config 00:02:33.443 bus/fslmc: not in enabled drivers build config 00:02:33.443 bus/ifpga: not in enabled drivers build config 00:02:33.443 bus/platform: not in enabled drivers build config 00:02:33.443 bus/uacce: not in enabled drivers build config 00:02:33.443 bus/vmbus: not in enabled drivers build config 00:02:33.443 common/cnxk: not in enabled drivers build config 00:02:33.443 common/mlx5: not in enabled drivers build config 00:02:33.443 common/nfp: not in enabled drivers build config 00:02:33.443 common/nitrox: not in enabled drivers build config 00:02:33.444 common/qat: not in enabled drivers build config 00:02:33.444 common/sfc_efx: not in enabled drivers build config 00:02:33.444 mempool/bucket: not in enabled drivers build config 00:02:33.444 mempool/cnxk: not in enabled drivers build config 00:02:33.444 mempool/dpaa: not in enabled drivers build config 00:02:33.444 mempool/dpaa2: not in enabled drivers build config 00:02:33.444 mempool/octeontx: not in enabled drivers build config 00:02:33.444 mempool/stack: not in enabled drivers build config 00:02:33.444 dma/cnxk: not in enabled drivers build config 00:02:33.444 dma/dpaa: not in enabled drivers build config 00:02:33.444 dma/dpaa2: not in enabled drivers build config 00:02:33.444 dma/hisilicon: not in enabled drivers build config 00:02:33.444 dma/idxd: not in enabled drivers build config 00:02:33.444 dma/ioat: not in enabled drivers build config 00:02:33.444 dma/skeleton: not in enabled drivers build config 00:02:33.444 net/af_packet: not in enabled drivers build config 00:02:33.444 net/af_xdp: not in enabled drivers build config 00:02:33.444 net/ark: not in enabled drivers build config 00:02:33.444 net/atlantic: not in enabled drivers build config 00:02:33.444 net/avp: not in enabled drivers build config 00:02:33.444 net/axgbe: not in enabled drivers build config 00:02:33.444 net/bnx2x: not in enabled drivers build config 00:02:33.444 net/bnxt: not in enabled drivers build config 00:02:33.444 net/bonding: not in enabled drivers build config 00:02:33.444 net/cnxk: not in enabled drivers build config 00:02:33.444 net/cpfl: 
not in enabled drivers build config 00:02:33.444 net/cxgbe: not in enabled drivers build config 00:02:33.444 net/dpaa: not in enabled drivers build config 00:02:33.444 net/dpaa2: not in enabled drivers build config 00:02:33.444 net/e1000: not in enabled drivers build config 00:02:33.444 net/ena: not in enabled drivers build config 00:02:33.444 net/enetc: not in enabled drivers build config 00:02:33.444 net/enetfec: not in enabled drivers build config 00:02:33.444 net/enic: not in enabled drivers build config 00:02:33.444 net/failsafe: not in enabled drivers build config 00:02:33.444 net/fm10k: not in enabled drivers build config 00:02:33.444 net/gve: not in enabled drivers build config 00:02:33.444 net/hinic: not in enabled drivers build config 00:02:33.444 net/hns3: not in enabled drivers build config 00:02:33.444 net/i40e: not in enabled drivers build config 00:02:33.444 net/iavf: not in enabled drivers build config 00:02:33.444 net/ice: not in enabled drivers build config 00:02:33.444 net/idpf: not in enabled drivers build config 00:02:33.444 net/igc: not in enabled drivers build config 00:02:33.444 net/ionic: not in enabled drivers build config 00:02:33.444 net/ipn3ke: not in enabled drivers build config 00:02:33.444 net/ixgbe: not in enabled drivers build config 00:02:33.444 net/mana: not in enabled drivers build config 00:02:33.444 net/memif: not in enabled drivers build config 00:02:33.444 net/mlx4: not in enabled drivers build config 00:02:33.444 net/mlx5: not in enabled drivers build config 00:02:33.444 net/mvneta: not in enabled drivers build config 00:02:33.444 net/mvpp2: not in enabled drivers build config 00:02:33.444 net/netvsc: not in enabled drivers build config 00:02:33.444 net/nfb: not in enabled drivers build config 00:02:33.444 net/nfp: not in enabled drivers build config 00:02:33.444 net/ngbe: not in enabled drivers build config 00:02:33.444 net/null: not in enabled drivers build config 00:02:33.444 net/octeontx: not in enabled drivers build config 00:02:33.444 net/octeon_ep: not in enabled drivers build config 00:02:33.444 net/pcap: not in enabled drivers build config 00:02:33.444 net/pfe: not in enabled drivers build config 00:02:33.444 net/qede: not in enabled drivers build config 00:02:33.444 net/ring: not in enabled drivers build config 00:02:33.444 net/sfc: not in enabled drivers build config 00:02:33.444 net/softnic: not in enabled drivers build config 00:02:33.444 net/tap: not in enabled drivers build config 00:02:33.444 net/thunderx: not in enabled drivers build config 00:02:33.444 net/txgbe: not in enabled drivers build config 00:02:33.444 net/vdev_netvsc: not in enabled drivers build config 00:02:33.444 net/vhost: not in enabled drivers build config 00:02:33.444 net/virtio: not in enabled drivers build config 00:02:33.444 net/vmxnet3: not in enabled drivers build config 00:02:33.444 raw/*: missing internal dependency, "rawdev" 00:02:33.444 crypto/armv8: not in enabled drivers build config 00:02:33.444 crypto/bcmfs: not in enabled drivers build config 00:02:33.444 crypto/caam_jr: not in enabled drivers build config 00:02:33.444 crypto/ccp: not in enabled drivers build config 00:02:33.444 crypto/cnxk: not in enabled drivers build config 00:02:33.444 crypto/dpaa_sec: not in enabled drivers build config 00:02:33.444 crypto/dpaa2_sec: not in enabled drivers build config 00:02:33.444 crypto/ipsec_mb: not in enabled drivers build config 00:02:33.444 crypto/mlx5: not in enabled drivers build config 00:02:33.444 crypto/mvsam: not in enabled drivers build config 
00:02:33.444 crypto/nitrox: not in enabled drivers build config 00:02:33.444 crypto/null: not in enabled drivers build config 00:02:33.444 crypto/octeontx: not in enabled drivers build config 00:02:33.444 crypto/openssl: not in enabled drivers build config 00:02:33.444 crypto/scheduler: not in enabled drivers build config 00:02:33.444 crypto/uadk: not in enabled drivers build config 00:02:33.444 crypto/virtio: not in enabled drivers build config 00:02:33.444 compress/isal: not in enabled drivers build config 00:02:33.444 compress/mlx5: not in enabled drivers build config 00:02:33.444 compress/nitrox: not in enabled drivers build config 00:02:33.444 compress/octeontx: not in enabled drivers build config 00:02:33.444 compress/zlib: not in enabled drivers build config 00:02:33.444 regex/*: missing internal dependency, "regexdev" 00:02:33.444 ml/*: missing internal dependency, "mldev" 00:02:33.444 vdpa/ifc: not in enabled drivers build config 00:02:33.444 vdpa/mlx5: not in enabled drivers build config 00:02:33.444 vdpa/nfp: not in enabled drivers build config 00:02:33.444 vdpa/sfc: not in enabled drivers build config 00:02:33.444 event/*: missing internal dependency, "eventdev" 00:02:33.444 baseband/*: missing internal dependency, "bbdev" 00:02:33.444 gpu/*: missing internal dependency, "gpudev" 00:02:33.444 00:02:33.444 00:02:33.444 Build targets in project: 84 00:02:33.444 00:02:33.444 DPDK 24.03.0 00:02:33.444 00:02:33.444 User defined options 00:02:33.444 buildtype : debug 00:02:33.444 default_library : shared 00:02:33.444 libdir : lib 00:02:33.444 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:33.444 b_sanitize : address 00:02:33.444 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:33.444 c_link_args : 00:02:33.444 cpu_instruction_set: native 00:02:33.444 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:33.444 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:33.444 enable_docs : false 00:02:33.444 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:33.444 enable_kmods : false 00:02:33.444 max_lcores : 128 00:02:33.444 tests : false 00:02:33.444 00:02:33.444 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:33.701 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:33.701 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:33.701 [2/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:33.701 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:33.701 [4/267] Linking static target lib/librte_log.a 00:02:33.701 [5/267] Linking static target lib/librte_kvargs.a 00:02:33.959 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:34.217 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:34.217 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:34.217 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 
00:02:34.217 [10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:34.217 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:34.217 [12/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.217 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:34.217 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:34.217 [15/267] Linking static target lib/librte_telemetry.a 00:02:34.217 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:34.217 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:34.217 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:34.475 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:34.475 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:34.475 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:34.475 [22/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.475 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:34.732 [24/267] Linking target lib/librte_log.so.24.1 00:02:34.732 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:34.732 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:34.732 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:34.732 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:34.732 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:34.732 [30/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:34.990 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:34.990 [32/267] Linking target lib/librte_kvargs.so.24.1 00:02:34.990 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:34.990 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:34.990 [35/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.990 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:34.990 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:34.990 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:34.990 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:34.990 [40/267] Linking target lib/librte_telemetry.so.24.1 00:02:34.990 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:34.990 [42/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:35.249 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:35.249 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:35.249 [45/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:35.249 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:35.526 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:35.526 [48/267] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:35.526 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:35.526 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:35.526 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:35.526 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:35.526 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:35.526 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:35.784 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:35.784 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:35.784 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:35.784 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:35.784 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:35.784 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:35.784 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:36.041 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:36.041 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:36.041 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:36.041 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:36.041 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:36.041 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:36.298 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:36.298 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:36.298 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:36.298 [71/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:36.298 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:36.298 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:36.298 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:36.298 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:36.555 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:36.555 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:36.555 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:36.555 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:36.555 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:36.555 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:36.555 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:36.813 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:36.813 [84/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:36.813 [85/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:36.813 [86/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:36.813 [87/267] Linking static target lib/librte_ring.a 00:02:36.813 [88/267] Linking static target lib/librte_eal.a 00:02:37.071 [89/267] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:37.071 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:37.071 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:37.071 [92/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:37.328 [93/267] Linking static target lib/librte_rcu.a 00:02:37.328 [94/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:37.328 [95/267] Linking static target lib/librte_mempool.a 00:02:37.328 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:37.328 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.328 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:37.328 [99/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:37.605 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:37.605 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:37.605 [102/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:37.605 [103/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.605 [104/267] Linking static target lib/librte_meter.a 00:02:37.605 [105/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:37.605 [106/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:37.605 [107/267] Linking static target lib/librte_mbuf.a 00:02:37.605 [108/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:37.863 [109/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:37.863 [110/267] Linking static target lib/librte_net.a 00:02:37.863 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:37.863 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:37.863 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:37.863 [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.121 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:38.122 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.122 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.122 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:38.380 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:38.380 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:38.380 [121/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.638 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:38.638 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:38.638 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:38.638 [125/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:38.638 [126/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:38.638 [127/267] Linking static target lib/librte_pci.a 00:02:38.638 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:38.638 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:38.638 [130/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:38.896 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:38.896 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:38.896 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:38.896 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:38.896 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:38.896 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:38.896 [137/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.896 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:38.896 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:38.896 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:38.896 [141/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:38.896 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:38.896 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:39.155 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:39.155 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:39.155 [146/267] Linking static target lib/librte_cmdline.a 00:02:39.155 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:39.155 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:39.414 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:39.414 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:39.414 [151/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:39.414 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:39.672 [153/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:39.672 [154/267] Linking static target lib/librte_timer.a 00:02:39.672 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:39.672 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:39.672 [157/267] Linking static target lib/librte_compressdev.a 00:02:39.672 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:39.672 [159/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:39.672 [160/267] Linking static target lib/librte_hash.a 00:02:39.932 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:39.932 [162/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:39.932 [163/267] Linking static target lib/librte_ethdev.a 00:02:39.932 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:39.932 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:39.932 [166/267] Linking static target lib/librte_dmadev.a 00:02:40.191 [167/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.191 [168/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:40.191 [169/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:40.191 [170/267] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:40.191 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:40.450 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.450 [173/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.450 [174/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:40.450 [175/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:40.450 [176/267] Linking static target lib/librte_cryptodev.a 00:02:40.450 [177/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:40.709 [178/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.709 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:40.709 [180/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:40.709 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:40.709 [182/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:40.709 [183/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.709 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:40.968 [185/267] Linking static target lib/librte_power.a 00:02:40.968 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:40.968 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:40.968 [188/267] Linking static target lib/librte_reorder.a 00:02:40.968 [189/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:41.227 [190/267] Linking static target lib/librte_security.a 00:02:41.227 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:41.227 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:41.227 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:41.485 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.743 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:41.743 [196/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.743 [197/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.743 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:41.743 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:41.743 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:42.001 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:42.001 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:42.001 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:42.001 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:42.001 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:42.258 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:42.258 [207/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:42.258 [208/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:42.258 [209/267] Linking static 
target drivers/libtmp_rte_bus_pci.a 00:02:42.258 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.539 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:42.539 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:42.539 [213/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:42.539 [214/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:42.539 [215/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:42.539 [216/267] Linking static target drivers/librte_bus_vdev.a 00:02:42.539 [217/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:42.539 [218/267] Linking static target drivers/librte_bus_pci.a 00:02:42.539 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:42.539 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:42.833 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:42.833 [222/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:42.833 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:42.833 [224/267] Linking static target drivers/librte_mempool_ring.a 00:02:42.833 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.090 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.348 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:44.281 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.281 [229/267] Linking target lib/librte_eal.so.24.1 00:02:44.281 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:44.539 [231/267] Linking target lib/librte_timer.so.24.1 00:02:44.539 [232/267] Linking target lib/librte_ring.so.24.1 00:02:44.539 [233/267] Linking target lib/librte_pci.so.24.1 00:02:44.539 [234/267] Linking target lib/librte_dmadev.so.24.1 00:02:44.539 [235/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:44.539 [236/267] Linking target lib/librte_meter.so.24.1 00:02:44.539 [237/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:44.539 [238/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:44.539 [239/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:44.539 [240/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:44.539 [241/267] Linking target lib/librte_rcu.so.24.1 00:02:44.539 [242/267] Linking target lib/librte_mempool.so.24.1 00:02:44.539 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:44.539 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:44.797 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:44.797 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:44.797 [247/267] Linking target lib/librte_mbuf.so.24.1 00:02:44.797 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:44.797 [249/267] Generating 
symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:44.797 [250/267] Linking target lib/librte_compressdev.so.24.1 00:02:44.797 [251/267] Linking target lib/librte_reorder.so.24.1 00:02:44.797 [252/267] Linking target lib/librte_net.so.24.1 00:02:44.797 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:02:45.054 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:45.054 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:45.054 [256/267] Linking target lib/librte_cmdline.so.24.1 00:02:45.054 [257/267] Linking target lib/librte_security.so.24.1 00:02:45.054 [258/267] Linking target lib/librte_hash.so.24.1 00:02:45.054 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:45.313 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.313 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:45.572 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:45.572 [263/267] Linking target lib/librte_power.so.24.1 00:02:46.138 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:46.138 [265/267] Linking static target lib/librte_vhost.a 00:02:47.513 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.513 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:47.513 INFO: autodetecting backend as ninja 00:02:47.513 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:02.381 CC lib/ut_mock/mock.o 00:03:02.381 CC lib/log/log.o 00:03:02.381 CC lib/log/log_deprecated.o 00:03:02.381 CC lib/ut/ut.o 00:03:02.381 CC lib/log/log_flags.o 00:03:02.381 LIB libspdk_ut.a 00:03:02.381 LIB libspdk_ut_mock.a 00:03:02.381 LIB libspdk_log.a 00:03:02.381 SO libspdk_ut.so.2.0 00:03:02.381 SO libspdk_ut_mock.so.6.0 00:03:02.381 SO libspdk_log.so.7.1 00:03:02.381 SYMLINK libspdk_ut.so 00:03:02.381 SYMLINK libspdk_ut_mock.so 00:03:02.381 SYMLINK libspdk_log.so 00:03:02.381 CC lib/util/base64.o 00:03:02.381 CC lib/util/bit_array.o 00:03:02.381 CC lib/util/cpuset.o 00:03:02.381 CC lib/util/crc16.o 00:03:02.381 CC lib/util/crc32.o 00:03:02.381 CC lib/util/crc32c.o 00:03:02.381 CC lib/dma/dma.o 00:03:02.381 CXX lib/trace_parser/trace.o 00:03:02.381 CC lib/ioat/ioat.o 00:03:02.381 CC lib/vfio_user/host/vfio_user_pci.o 00:03:02.381 CC lib/util/crc32_ieee.o 00:03:02.381 CC lib/util/crc64.o 00:03:02.381 CC lib/util/dif.o 00:03:02.381 CC lib/util/fd.o 00:03:02.381 LIB libspdk_dma.a 00:03:02.381 CC lib/vfio_user/host/vfio_user.o 00:03:02.381 SO libspdk_dma.so.5.0 00:03:02.381 CC lib/util/fd_group.o 00:03:02.381 CC lib/util/file.o 00:03:02.381 CC lib/util/hexlify.o 00:03:02.381 SYMLINK libspdk_dma.so 00:03:02.381 CC lib/util/iov.o 00:03:02.381 LIB libspdk_ioat.a 00:03:02.381 SO libspdk_ioat.so.7.0 00:03:02.381 CC lib/util/math.o 00:03:02.381 CC lib/util/net.o 00:03:02.381 SYMLINK libspdk_ioat.so 00:03:02.381 CC lib/util/pipe.o 00:03:02.381 CC lib/util/strerror_tls.o 00:03:02.381 CC lib/util/string.o 00:03:02.381 CC lib/util/uuid.o 00:03:02.381 CC lib/util/xor.o 00:03:02.381 CC lib/util/zipf.o 00:03:02.381 LIB libspdk_vfio_user.a 00:03:02.381 CC lib/util/md5.o 00:03:02.381 SO libspdk_vfio_user.so.5.0 00:03:02.381 SYMLINK libspdk_vfio_user.so 00:03:02.381 LIB libspdk_util.a 00:03:02.381 SO libspdk_util.so.10.1 00:03:02.381 LIB 
libspdk_trace_parser.a 00:03:02.381 SO libspdk_trace_parser.so.6.0 00:03:02.381 SYMLINK libspdk_util.so 00:03:02.381 SYMLINK libspdk_trace_parser.so 00:03:02.381 CC lib/idxd/idxd.o 00:03:02.381 CC lib/idxd/idxd_kernel.o 00:03:02.381 CC lib/idxd/idxd_user.o 00:03:02.381 CC lib/conf/conf.o 00:03:02.381 CC lib/vmd/vmd.o 00:03:02.381 CC lib/vmd/led.o 00:03:02.381 CC lib/env_dpdk/env.o 00:03:02.381 CC lib/env_dpdk/memory.o 00:03:02.381 CC lib/json/json_parse.o 00:03:02.381 CC lib/rdma_utils/rdma_utils.o 00:03:02.381 CC lib/env_dpdk/pci.o 00:03:02.381 CC lib/json/json_util.o 00:03:02.381 LIB libspdk_conf.a 00:03:02.381 SO libspdk_conf.so.6.0 00:03:02.381 CC lib/json/json_write.o 00:03:02.381 CC lib/env_dpdk/init.o 00:03:02.381 SYMLINK libspdk_conf.so 00:03:02.381 CC lib/env_dpdk/threads.o 00:03:02.381 LIB libspdk_rdma_utils.a 00:03:02.381 SO libspdk_rdma_utils.so.1.0 00:03:02.381 SYMLINK libspdk_rdma_utils.so 00:03:02.381 CC lib/env_dpdk/pci_ioat.o 00:03:02.381 CC lib/env_dpdk/pci_virtio.o 00:03:02.381 CC lib/env_dpdk/pci_vmd.o 00:03:02.381 CC lib/env_dpdk/pci_idxd.o 00:03:02.381 CC lib/env_dpdk/pci_event.o 00:03:02.381 CC lib/env_dpdk/sigbus_handler.o 00:03:02.381 LIB libspdk_json.a 00:03:02.381 CC lib/env_dpdk/pci_dpdk.o 00:03:02.381 SO libspdk_json.so.6.0 00:03:02.381 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:02.381 SYMLINK libspdk_json.so 00:03:02.381 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:02.381 LIB libspdk_idxd.a 00:03:02.381 SO libspdk_idxd.so.12.1 00:03:02.381 CC lib/rdma_provider/common.o 00:03:02.381 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:02.381 LIB libspdk_vmd.a 00:03:02.381 SYMLINK libspdk_idxd.so 00:03:02.381 SO libspdk_vmd.so.6.0 00:03:02.381 CC lib/jsonrpc/jsonrpc_server.o 00:03:02.381 CC lib/jsonrpc/jsonrpc_client.o 00:03:02.381 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:02.381 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:02.381 SYMLINK libspdk_vmd.so 00:03:02.640 LIB libspdk_rdma_provider.a 00:03:02.640 SO libspdk_rdma_provider.so.7.0 00:03:02.640 SYMLINK libspdk_rdma_provider.so 00:03:02.640 LIB libspdk_jsonrpc.a 00:03:02.640 SO libspdk_jsonrpc.so.6.0 00:03:02.898 SYMLINK libspdk_jsonrpc.so 00:03:02.898 CC lib/rpc/rpc.o 00:03:03.157 LIB libspdk_env_dpdk.a 00:03:03.157 LIB libspdk_rpc.a 00:03:03.157 SO libspdk_rpc.so.6.0 00:03:03.157 SO libspdk_env_dpdk.so.15.1 00:03:03.157 SYMLINK libspdk_rpc.so 00:03:03.415 SYMLINK libspdk_env_dpdk.so 00:03:03.415 CC lib/notify/notify.o 00:03:03.415 CC lib/keyring/keyring.o 00:03:03.415 CC lib/notify/notify_rpc.o 00:03:03.415 CC lib/keyring/keyring_rpc.o 00:03:03.415 CC lib/trace/trace.o 00:03:03.415 CC lib/trace/trace_flags.o 00:03:03.415 CC lib/trace/trace_rpc.o 00:03:03.674 LIB libspdk_notify.a 00:03:03.674 SO libspdk_notify.so.6.0 00:03:03.674 LIB libspdk_keyring.a 00:03:03.674 SYMLINK libspdk_notify.so 00:03:03.674 SO libspdk_keyring.so.2.0 00:03:03.674 LIB libspdk_trace.a 00:03:03.674 SO libspdk_trace.so.11.0 00:03:03.674 SYMLINK libspdk_keyring.so 00:03:03.674 SYMLINK libspdk_trace.so 00:03:03.933 CC lib/thread/thread.o 00:03:03.933 CC lib/thread/iobuf.o 00:03:03.933 CC lib/sock/sock.o 00:03:03.933 CC lib/sock/sock_rpc.o 00:03:04.499 LIB libspdk_sock.a 00:03:04.499 SO libspdk_sock.so.10.0 00:03:04.499 SYMLINK libspdk_sock.so 00:03:04.757 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:04.757 CC lib/nvme/nvme_ctrlr.o 00:03:04.757 CC lib/nvme/nvme_fabric.o 00:03:04.757 CC lib/nvme/nvme_pcie_common.o 00:03:04.757 CC lib/nvme/nvme_ns.o 00:03:04.757 CC lib/nvme/nvme_ns_cmd.o 00:03:04.757 CC lib/nvme/nvme_pcie.o 00:03:04.757 CC lib/nvme/nvme.o 
00:03:04.757 CC lib/nvme/nvme_qpair.o 00:03:05.322 LIB libspdk_thread.a 00:03:05.322 SO libspdk_thread.so.11.0 00:03:05.322 CC lib/nvme/nvme_quirks.o 00:03:05.322 CC lib/nvme/nvme_transport.o 00:03:05.322 CC lib/nvme/nvme_discovery.o 00:03:05.322 SYMLINK libspdk_thread.so 00:03:05.322 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:05.322 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:05.322 CC lib/nvme/nvme_tcp.o 00:03:05.580 CC lib/nvme/nvme_opal.o 00:03:05.580 CC lib/nvme/nvme_io_msg.o 00:03:05.580 CC lib/nvme/nvme_poll_group.o 00:03:05.580 CC lib/nvme/nvme_zns.o 00:03:05.839 CC lib/nvme/nvme_stubs.o 00:03:05.839 CC lib/nvme/nvme_auth.o 00:03:05.839 CC lib/accel/accel.o 00:03:05.839 CC lib/accel/accel_rpc.o 00:03:06.098 CC lib/nvme/nvme_cuse.o 00:03:06.098 CC lib/nvme/nvme_rdma.o 00:03:06.098 CC lib/accel/accel_sw.o 00:03:06.357 CC lib/blob/blobstore.o 00:03:06.357 CC lib/init/json_config.o 00:03:06.357 CC lib/blob/request.o 00:03:06.357 CC lib/virtio/virtio.o 00:03:06.615 CC lib/blob/zeroes.o 00:03:06.615 CC lib/init/subsystem.o 00:03:06.615 CC lib/blob/blob_bs_dev.o 00:03:06.615 CC lib/virtio/virtio_vhost_user.o 00:03:06.615 CC lib/init/subsystem_rpc.o 00:03:06.615 CC lib/init/rpc.o 00:03:06.874 CC lib/virtio/virtio_vfio_user.o 00:03:06.874 CC lib/virtio/virtio_pci.o 00:03:06.874 LIB libspdk_init.a 00:03:06.874 CC lib/fsdev/fsdev_io.o 00:03:06.874 CC lib/fsdev/fsdev.o 00:03:06.874 CC lib/fsdev/fsdev_rpc.o 00:03:06.874 SO libspdk_init.so.6.0 00:03:06.874 LIB libspdk_accel.a 00:03:06.874 SO libspdk_accel.so.16.0 00:03:06.874 SYMLINK libspdk_init.so 00:03:06.874 SYMLINK libspdk_accel.so 00:03:07.132 LIB libspdk_virtio.a 00:03:07.132 CC lib/event/reactor.o 00:03:07.132 CC lib/event/app.o 00:03:07.132 CC lib/event/app_rpc.o 00:03:07.132 CC lib/event/log_rpc.o 00:03:07.132 SO libspdk_virtio.so.7.0 00:03:07.132 CC lib/event/scheduler_static.o 00:03:07.132 CC lib/bdev/bdev.o 00:03:07.132 LIB libspdk_nvme.a 00:03:07.132 SYMLINK libspdk_virtio.so 00:03:07.132 CC lib/bdev/bdev_rpc.o 00:03:07.132 CC lib/bdev/bdev_zone.o 00:03:07.132 CC lib/bdev/part.o 00:03:07.389 SO libspdk_nvme.so.15.0 00:03:07.389 CC lib/bdev/scsi_nvme.o 00:03:07.389 LIB libspdk_fsdev.a 00:03:07.389 SO libspdk_fsdev.so.2.0 00:03:07.645 LIB libspdk_event.a 00:03:07.645 SYMLINK libspdk_fsdev.so 00:03:07.645 SYMLINK libspdk_nvme.so 00:03:07.645 SO libspdk_event.so.14.0 00:03:07.645 SYMLINK libspdk_event.so 00:03:07.901 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:08.463 LIB libspdk_fuse_dispatcher.a 00:03:08.463 SO libspdk_fuse_dispatcher.so.1.0 00:03:08.463 SYMLINK libspdk_fuse_dispatcher.so 00:03:09.395 LIB libspdk_blob.a 00:03:09.396 SO libspdk_blob.so.12.0 00:03:09.396 SYMLINK libspdk_blob.so 00:03:09.653 CC lib/lvol/lvol.o 00:03:09.653 CC lib/blobfs/blobfs.o 00:03:09.653 CC lib/blobfs/tree.o 00:03:09.909 LIB libspdk_bdev.a 00:03:10.165 SO libspdk_bdev.so.17.0 00:03:10.165 SYMLINK libspdk_bdev.so 00:03:10.422 CC lib/ftl/ftl_core.o 00:03:10.422 CC lib/ftl/ftl_init.o 00:03:10.422 CC lib/nvmf/ctrlr.o 00:03:10.422 CC lib/nvmf/ctrlr_discovery.o 00:03:10.422 CC lib/scsi/dev.o 00:03:10.422 CC lib/nvmf/ctrlr_bdev.o 00:03:10.422 CC lib/nbd/nbd.o 00:03:10.422 CC lib/ublk/ublk.o 00:03:10.422 LIB libspdk_lvol.a 00:03:10.422 SO libspdk_lvol.so.11.0 00:03:10.422 SYMLINK libspdk_lvol.so 00:03:10.422 CC lib/ublk/ublk_rpc.o 00:03:10.422 CC lib/ftl/ftl_layout.o 00:03:10.422 CC lib/scsi/lun.o 00:03:10.422 LIB libspdk_blobfs.a 00:03:10.422 SO libspdk_blobfs.so.11.0 00:03:10.707 CC lib/nvmf/subsystem.o 00:03:10.707 CC lib/ftl/ftl_debug.o 00:03:10.707 SYMLINK 
libspdk_blobfs.so 00:03:10.707 CC lib/nbd/nbd_rpc.o 00:03:10.707 CC lib/nvmf/nvmf.o 00:03:10.707 CC lib/ftl/ftl_io.o 00:03:10.707 LIB libspdk_nbd.a 00:03:10.707 CC lib/nvmf/nvmf_rpc.o 00:03:10.707 SO libspdk_nbd.so.7.0 00:03:10.707 CC lib/scsi/port.o 00:03:10.707 CC lib/scsi/scsi.o 00:03:10.969 SYMLINK libspdk_nbd.so 00:03:10.969 CC lib/scsi/scsi_bdev.o 00:03:10.969 CC lib/scsi/scsi_pr.o 00:03:10.969 CC lib/nvmf/transport.o 00:03:10.969 CC lib/ftl/ftl_sb.o 00:03:10.969 LIB libspdk_ublk.a 00:03:10.969 SO libspdk_ublk.so.3.0 00:03:10.969 CC lib/ftl/ftl_l2p.o 00:03:10.969 SYMLINK libspdk_ublk.so 00:03:10.969 CC lib/ftl/ftl_l2p_flat.o 00:03:11.226 CC lib/ftl/ftl_nv_cache.o 00:03:11.226 CC lib/scsi/scsi_rpc.o 00:03:11.226 CC lib/ftl/ftl_band.o 00:03:11.226 CC lib/scsi/task.o 00:03:11.226 CC lib/ftl/ftl_band_ops.o 00:03:11.226 CC lib/nvmf/tcp.o 00:03:11.226 CC lib/nvmf/stubs.o 00:03:11.483 LIB libspdk_scsi.a 00:03:11.483 SO libspdk_scsi.so.9.0 00:03:11.483 CC lib/ftl/ftl_writer.o 00:03:11.483 CC lib/nvmf/mdns_server.o 00:03:11.483 SYMLINK libspdk_scsi.so 00:03:11.483 CC lib/ftl/ftl_rq.o 00:03:11.483 CC lib/nvmf/rdma.o 00:03:11.483 CC lib/ftl/ftl_reloc.o 00:03:11.483 CC lib/ftl/ftl_l2p_cache.o 00:03:11.741 CC lib/ftl/ftl_p2l.o 00:03:11.741 CC lib/ftl/ftl_p2l_log.o 00:03:11.741 CC lib/iscsi/conn.o 00:03:11.998 CC lib/iscsi/init_grp.o 00:03:11.998 CC lib/iscsi/iscsi.o 00:03:11.998 CC lib/iscsi/param.o 00:03:11.998 CC lib/iscsi/portal_grp.o 00:03:12.256 CC lib/ftl/mngt/ftl_mngt.o 00:03:12.256 CC lib/iscsi/tgt_node.o 00:03:12.256 CC lib/iscsi/iscsi_subsystem.o 00:03:12.256 CC lib/vhost/vhost.o 00:03:12.256 CC lib/vhost/vhost_rpc.o 00:03:12.256 CC lib/vhost/vhost_scsi.o 00:03:12.514 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:12.514 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:12.514 CC lib/vhost/vhost_blk.o 00:03:12.514 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:12.514 CC lib/vhost/rte_vhost_user.o 00:03:12.514 CC lib/iscsi/iscsi_rpc.o 00:03:12.514 CC lib/iscsi/task.o 00:03:12.771 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:12.772 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:12.772 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:13.030 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:13.030 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:13.030 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:13.030 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:13.030 CC lib/nvmf/auth.o 00:03:13.030 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:13.030 LIB libspdk_iscsi.a 00:03:13.288 SO libspdk_iscsi.so.8.0 00:03:13.288 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:13.288 CC lib/ftl/utils/ftl_conf.o 00:03:13.288 CC lib/ftl/utils/ftl_md.o 00:03:13.288 CC lib/ftl/utils/ftl_mempool.o 00:03:13.288 CC lib/ftl/utils/ftl_bitmap.o 00:03:13.288 SYMLINK libspdk_iscsi.so 00:03:13.288 CC lib/ftl/utils/ftl_property.o 00:03:13.288 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:13.288 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:13.288 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:13.288 LIB libspdk_vhost.a 00:03:13.288 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:13.288 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:13.546 SO libspdk_vhost.so.8.0 00:03:13.546 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:13.546 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:13.546 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:13.546 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:13.546 SYMLINK libspdk_vhost.so 00:03:13.546 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:13.546 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:13.546 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:13.546 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:13.546 CC lib/ftl/base/ftl_base_dev.o 00:03:13.546 CC 
lib/ftl/base/ftl_base_bdev.o 00:03:13.804 CC lib/ftl/ftl_trace.o 00:03:13.804 LIB libspdk_nvmf.a 00:03:13.804 LIB libspdk_ftl.a 00:03:14.064 SO libspdk_nvmf.so.20.0 00:03:14.064 SO libspdk_ftl.so.9.0 00:03:14.064 SYMLINK libspdk_nvmf.so 00:03:14.323 SYMLINK libspdk_ftl.so 00:03:14.582 CC module/env_dpdk/env_dpdk_rpc.o 00:03:14.582 CC module/fsdev/aio/fsdev_aio.o 00:03:14.582 CC module/accel/error/accel_error.o 00:03:14.582 CC module/accel/ioat/accel_ioat.o 00:03:14.582 CC module/accel/dsa/accel_dsa.o 00:03:14.582 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:14.582 CC module/sock/posix/posix.o 00:03:14.582 CC module/keyring/file/keyring.o 00:03:14.582 CC module/blob/bdev/blob_bdev.o 00:03:14.582 CC module/accel/iaa/accel_iaa.o 00:03:14.582 LIB libspdk_env_dpdk_rpc.a 00:03:14.582 SO libspdk_env_dpdk_rpc.so.6.0 00:03:14.582 SYMLINK libspdk_env_dpdk_rpc.so 00:03:14.582 CC module/accel/iaa/accel_iaa_rpc.o 00:03:14.842 CC module/keyring/file/keyring_rpc.o 00:03:14.842 LIB libspdk_scheduler_dynamic.a 00:03:14.842 SO libspdk_scheduler_dynamic.so.4.0 00:03:14.842 CC module/accel/ioat/accel_ioat_rpc.o 00:03:14.842 CC module/accel/error/accel_error_rpc.o 00:03:14.842 LIB libspdk_accel_iaa.a 00:03:14.842 SYMLINK libspdk_scheduler_dynamic.so 00:03:14.842 SO libspdk_accel_iaa.so.3.0 00:03:14.842 LIB libspdk_blob_bdev.a 00:03:14.842 SYMLINK libspdk_accel_iaa.so 00:03:14.842 CC module/accel/dsa/accel_dsa_rpc.o 00:03:14.842 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:14.842 LIB libspdk_keyring_file.a 00:03:14.842 SO libspdk_blob_bdev.so.12.0 00:03:14.842 LIB libspdk_accel_ioat.a 00:03:14.842 SO libspdk_keyring_file.so.2.0 00:03:14.842 LIB libspdk_accel_error.a 00:03:14.842 SO libspdk_accel_ioat.so.6.0 00:03:14.842 SO libspdk_accel_error.so.2.0 00:03:14.842 SYMLINK libspdk_blob_bdev.so 00:03:14.842 CC module/scheduler/gscheduler/gscheduler.o 00:03:14.842 SYMLINK libspdk_keyring_file.so 00:03:14.842 SYMLINK libspdk_accel_ioat.so 00:03:14.842 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:14.842 CC module/fsdev/aio/linux_aio_mgr.o 00:03:14.842 LIB libspdk_accel_dsa.a 00:03:14.842 SYMLINK libspdk_accel_error.so 00:03:15.100 LIB libspdk_scheduler_dpdk_governor.a 00:03:15.100 SO libspdk_accel_dsa.so.5.0 00:03:15.100 CC module/keyring/linux/keyring.o 00:03:15.100 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:15.100 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:15.100 SYMLINK libspdk_accel_dsa.so 00:03:15.100 LIB libspdk_scheduler_gscheduler.a 00:03:15.100 SO libspdk_scheduler_gscheduler.so.4.0 00:03:15.100 CC module/keyring/linux/keyring_rpc.o 00:03:15.100 SYMLINK libspdk_scheduler_gscheduler.so 00:03:15.100 CC module/bdev/delay/vbdev_delay.o 00:03:15.100 CC module/blobfs/bdev/blobfs_bdev.o 00:03:15.100 LIB libspdk_sock_posix.a 00:03:15.100 CC module/bdev/error/vbdev_error.o 00:03:15.100 CC module/bdev/gpt/gpt.o 00:03:15.100 SO libspdk_sock_posix.so.6.0 00:03:15.100 CC module/bdev/lvol/vbdev_lvol.o 00:03:15.100 LIB libspdk_keyring_linux.a 00:03:15.360 SO libspdk_keyring_linux.so.1.0 00:03:15.360 LIB libspdk_fsdev_aio.a 00:03:15.360 CC module/bdev/malloc/bdev_malloc.o 00:03:15.360 SYMLINK libspdk_sock_posix.so 00:03:15.360 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:15.360 CC module/bdev/null/bdev_null.o 00:03:15.360 SYMLINK libspdk_keyring_linux.so 00:03:15.360 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:15.360 CC module/bdev/null/bdev_null_rpc.o 00:03:15.360 SO libspdk_fsdev_aio.so.1.0 00:03:15.360 CC module/bdev/gpt/vbdev_gpt.o 00:03:15.360 SYMLINK libspdk_fsdev_aio.so 00:03:15.360 CC 
module/bdev/delay/vbdev_delay_rpc.o 00:03:15.360 CC module/bdev/error/vbdev_error_rpc.o 00:03:15.360 LIB libspdk_blobfs_bdev.a 00:03:15.360 SO libspdk_blobfs_bdev.so.6.0 00:03:15.618 LIB libspdk_bdev_delay.a 00:03:15.618 LIB libspdk_bdev_null.a 00:03:15.618 SYMLINK libspdk_blobfs_bdev.so 00:03:15.618 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:15.618 SO libspdk_bdev_delay.so.6.0 00:03:15.618 LIB libspdk_bdev_error.a 00:03:15.618 SO libspdk_bdev_null.so.6.0 00:03:15.618 SO libspdk_bdev_error.so.6.0 00:03:15.618 SYMLINK libspdk_bdev_null.so 00:03:15.618 SYMLINK libspdk_bdev_delay.so 00:03:15.618 CC module/bdev/nvme/bdev_nvme.o 00:03:15.618 SYMLINK libspdk_bdev_error.so 00:03:15.618 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:15.618 LIB libspdk_bdev_gpt.a 00:03:15.618 LIB libspdk_bdev_malloc.a 00:03:15.618 CC module/bdev/passthru/vbdev_passthru.o 00:03:15.618 SO libspdk_bdev_gpt.so.6.0 00:03:15.618 SO libspdk_bdev_malloc.so.6.0 00:03:15.618 CC module/bdev/nvme/nvme_rpc.o 00:03:15.618 SYMLINK libspdk_bdev_gpt.so 00:03:15.618 LIB libspdk_bdev_lvol.a 00:03:15.618 SYMLINK libspdk_bdev_malloc.so 00:03:15.618 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:15.618 CC module/bdev/raid/bdev_raid.o 00:03:15.618 CC module/bdev/split/vbdev_split.o 00:03:15.618 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:15.618 SO libspdk_bdev_lvol.so.6.0 00:03:15.877 SYMLINK libspdk_bdev_lvol.so 00:03:15.877 CC module/bdev/xnvme/bdev_xnvme.o 00:03:15.877 CC module/bdev/raid/bdev_raid_rpc.o 00:03:15.877 LIB libspdk_bdev_passthru.a 00:03:15.877 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:15.877 SO libspdk_bdev_passthru.so.6.0 00:03:15.877 CC module/bdev/split/vbdev_split_rpc.o 00:03:15.877 CC module/bdev/aio/bdev_aio.o 00:03:15.877 SYMLINK libspdk_bdev_passthru.so 00:03:15.877 CC module/bdev/nvme/bdev_mdns_client.o 00:03:16.136 CC module/bdev/nvme/vbdev_opal.o 00:03:16.136 LIB libspdk_bdev_split.a 00:03:16.136 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:16.136 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:16.136 SO libspdk_bdev_split.so.6.0 00:03:16.136 LIB libspdk_bdev_xnvme.a 00:03:16.136 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:16.136 SO libspdk_bdev_xnvme.so.3.0 00:03:16.136 SYMLINK libspdk_bdev_split.so 00:03:16.136 CC module/bdev/raid/bdev_raid_sb.o 00:03:16.136 CC module/bdev/raid/raid0.o 00:03:16.136 CC module/bdev/aio/bdev_aio_rpc.o 00:03:16.136 SYMLINK libspdk_bdev_xnvme.so 00:03:16.136 LIB libspdk_bdev_zone_block.a 00:03:16.136 SO libspdk_bdev_zone_block.so.6.0 00:03:16.396 CC module/bdev/raid/raid1.o 00:03:16.396 CC module/bdev/raid/concat.o 00:03:16.396 SYMLINK libspdk_bdev_zone_block.so 00:03:16.396 LIB libspdk_bdev_aio.a 00:03:16.396 SO libspdk_bdev_aio.so.6.0 00:03:16.396 CC module/bdev/ftl/bdev_ftl.o 00:03:16.396 SYMLINK libspdk_bdev_aio.so 00:03:16.396 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:16.396 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:16.396 CC module/bdev/iscsi/bdev_iscsi.o 00:03:16.396 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:16.396 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:16.396 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:16.655 LIB libspdk_bdev_raid.a 00:03:16.655 SO libspdk_bdev_raid.so.6.0 00:03:16.655 LIB libspdk_bdev_ftl.a 00:03:16.655 SO libspdk_bdev_ftl.so.6.0 00:03:16.655 SYMLINK libspdk_bdev_raid.so 00:03:16.655 SYMLINK libspdk_bdev_ftl.so 00:03:16.655 LIB libspdk_bdev_iscsi.a 00:03:16.655 SO libspdk_bdev_iscsi.so.6.0 00:03:16.915 SYMLINK libspdk_bdev_iscsi.so 00:03:16.915 LIB libspdk_bdev_virtio.a 00:03:16.915 SO libspdk_bdev_virtio.so.6.0 
00:03:16.915 SYMLINK libspdk_bdev_virtio.so 00:03:18.300 LIB libspdk_bdev_nvme.a 00:03:18.300 SO libspdk_bdev_nvme.so.7.1 00:03:18.300 SYMLINK libspdk_bdev_nvme.so 00:03:18.558 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:18.815 CC module/event/subsystems/keyring/keyring.o 00:03:18.815 CC module/event/subsystems/iobuf/iobuf.o 00:03:18.815 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:18.815 CC module/event/subsystems/vmd/vmd.o 00:03:18.815 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:18.815 CC module/event/subsystems/sock/sock.o 00:03:18.815 CC module/event/subsystems/scheduler/scheduler.o 00:03:18.815 CC module/event/subsystems/fsdev/fsdev.o 00:03:18.815 LIB libspdk_event_keyring.a 00:03:18.815 LIB libspdk_event_vmd.a 00:03:18.815 LIB libspdk_event_vhost_blk.a 00:03:18.816 SO libspdk_event_keyring.so.1.0 00:03:18.816 LIB libspdk_event_scheduler.a 00:03:18.816 LIB libspdk_event_fsdev.a 00:03:18.816 LIB libspdk_event_sock.a 00:03:18.816 LIB libspdk_event_iobuf.a 00:03:18.816 SO libspdk_event_vmd.so.6.0 00:03:18.816 SO libspdk_event_vhost_blk.so.3.0 00:03:18.816 SO libspdk_event_fsdev.so.1.0 00:03:18.816 SO libspdk_event_scheduler.so.4.0 00:03:18.816 SO libspdk_event_sock.so.5.0 00:03:18.816 SO libspdk_event_iobuf.so.3.0 00:03:18.816 SYMLINK libspdk_event_keyring.so 00:03:18.816 SYMLINK libspdk_event_vmd.so 00:03:18.816 SYMLINK libspdk_event_vhost_blk.so 00:03:18.816 SYMLINK libspdk_event_sock.so 00:03:18.816 SYMLINK libspdk_event_fsdev.so 00:03:18.816 SYMLINK libspdk_event_scheduler.so 00:03:18.816 SYMLINK libspdk_event_iobuf.so 00:03:19.073 CC module/event/subsystems/accel/accel.o 00:03:19.332 LIB libspdk_event_accel.a 00:03:19.332 SO libspdk_event_accel.so.6.0 00:03:19.332 SYMLINK libspdk_event_accel.so 00:03:19.590 CC module/event/subsystems/bdev/bdev.o 00:03:19.590 LIB libspdk_event_bdev.a 00:03:19.848 SO libspdk_event_bdev.so.6.0 00:03:19.848 SYMLINK libspdk_event_bdev.so 00:03:19.848 CC module/event/subsystems/scsi/scsi.o 00:03:19.848 CC module/event/subsystems/nbd/nbd.o 00:03:19.848 CC module/event/subsystems/ublk/ublk.o 00:03:19.848 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:19.848 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:20.106 LIB libspdk_event_ublk.a 00:03:20.106 LIB libspdk_event_nbd.a 00:03:20.106 SO libspdk_event_ublk.so.3.0 00:03:20.106 SO libspdk_event_nbd.so.6.0 00:03:20.106 LIB libspdk_event_scsi.a 00:03:20.106 SO libspdk_event_scsi.so.6.0 00:03:20.106 SYMLINK libspdk_event_nbd.so 00:03:20.106 SYMLINK libspdk_event_ublk.so 00:03:20.106 SYMLINK libspdk_event_scsi.so 00:03:20.106 LIB libspdk_event_nvmf.a 00:03:20.106 SO libspdk_event_nvmf.so.6.0 00:03:20.107 SYMLINK libspdk_event_nvmf.so 00:03:20.365 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:20.365 CC module/event/subsystems/iscsi/iscsi.o 00:03:20.365 LIB libspdk_event_vhost_scsi.a 00:03:20.365 LIB libspdk_event_iscsi.a 00:03:20.624 SO libspdk_event_iscsi.so.6.0 00:03:20.624 SO libspdk_event_vhost_scsi.so.3.0 00:03:20.624 SYMLINK libspdk_event_vhost_scsi.so 00:03:20.624 SYMLINK libspdk_event_iscsi.so 00:03:20.624 SO libspdk.so.6.0 00:03:20.624 SYMLINK libspdk.so 00:03:20.882 CC app/spdk_lspci/spdk_lspci.o 00:03:20.883 CC app/spdk_nvme_identify/identify.o 00:03:20.883 CXX app/trace/trace.o 00:03:20.883 CC app/trace_record/trace_record.o 00:03:20.883 CC app/spdk_nvme_perf/perf.o 00:03:20.883 CC app/iscsi_tgt/iscsi_tgt.o 00:03:20.883 CC app/nvmf_tgt/nvmf_main.o 00:03:20.883 CC app/spdk_tgt/spdk_tgt.o 00:03:20.883 CC examples/util/zipf/zipf.o 00:03:20.883 CC 
test/thread/poller_perf/poller_perf.o 00:03:20.883 LINK spdk_lspci 00:03:21.141 LINK zipf 00:03:21.141 LINK nvmf_tgt 00:03:21.141 LINK spdk_tgt 00:03:21.141 LINK iscsi_tgt 00:03:21.141 LINK poller_perf 00:03:21.141 LINK spdk_trace_record 00:03:21.141 CC app/spdk_nvme_discover/discovery_aer.o 00:03:21.141 LINK spdk_trace 00:03:21.399 CC examples/ioat/perf/perf.o 00:03:21.399 CC examples/ioat/verify/verify.o 00:03:21.399 CC app/spdk_top/spdk_top.o 00:03:21.399 CC app/spdk_dd/spdk_dd.o 00:03:21.399 LINK spdk_nvme_discover 00:03:21.399 CC test/dma/test_dma/test_dma.o 00:03:21.399 CC app/fio/nvme/fio_plugin.o 00:03:21.399 CC examples/vmd/lsvmd/lsvmd.o 00:03:21.399 LINK verify 00:03:21.399 LINK ioat_perf 00:03:21.657 CC examples/vmd/led/led.o 00:03:21.657 LINK lsvmd 00:03:21.657 LINK spdk_nvme_identify 00:03:21.657 LINK led 00:03:21.657 LINK spdk_dd 00:03:21.657 LINK spdk_nvme_perf 00:03:21.657 CC app/vhost/vhost.o 00:03:21.657 CC examples/idxd/perf/perf.o 00:03:21.657 LINK test_dma 00:03:21.914 CC app/fio/bdev/fio_plugin.o 00:03:21.914 LINK vhost 00:03:21.914 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:21.914 CC test/app/bdev_svc/bdev_svc.o 00:03:21.914 TEST_HEADER include/spdk/accel.h 00:03:21.914 TEST_HEADER include/spdk/accel_module.h 00:03:21.914 TEST_HEADER include/spdk/assert.h 00:03:21.914 TEST_HEADER include/spdk/barrier.h 00:03:21.914 TEST_HEADER include/spdk/base64.h 00:03:21.914 TEST_HEADER include/spdk/bdev.h 00:03:21.914 TEST_HEADER include/spdk/bdev_module.h 00:03:21.914 TEST_HEADER include/spdk/bdev_zone.h 00:03:21.914 TEST_HEADER include/spdk/bit_array.h 00:03:21.914 TEST_HEADER include/spdk/bit_pool.h 00:03:21.914 LINK spdk_nvme 00:03:21.914 TEST_HEADER include/spdk/blob_bdev.h 00:03:21.914 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:21.914 TEST_HEADER include/spdk/blobfs.h 00:03:21.914 TEST_HEADER include/spdk/blob.h 00:03:21.914 TEST_HEADER include/spdk/conf.h 00:03:21.914 TEST_HEADER include/spdk/config.h 00:03:21.914 TEST_HEADER include/spdk/cpuset.h 00:03:21.914 TEST_HEADER include/spdk/crc16.h 00:03:21.914 TEST_HEADER include/spdk/crc32.h 00:03:21.914 TEST_HEADER include/spdk/crc64.h 00:03:21.914 TEST_HEADER include/spdk/dif.h 00:03:21.914 TEST_HEADER include/spdk/dma.h 00:03:21.914 TEST_HEADER include/spdk/endian.h 00:03:21.914 TEST_HEADER include/spdk/env_dpdk.h 00:03:21.914 TEST_HEADER include/spdk/env.h 00:03:21.914 TEST_HEADER include/spdk/event.h 00:03:21.914 TEST_HEADER include/spdk/fd_group.h 00:03:21.914 TEST_HEADER include/spdk/fd.h 00:03:21.914 TEST_HEADER include/spdk/file.h 00:03:21.914 TEST_HEADER include/spdk/fsdev.h 00:03:21.914 TEST_HEADER include/spdk/fsdev_module.h 00:03:21.914 TEST_HEADER include/spdk/ftl.h 00:03:21.914 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:21.914 TEST_HEADER include/spdk/gpt_spec.h 00:03:21.914 TEST_HEADER include/spdk/hexlify.h 00:03:21.914 TEST_HEADER include/spdk/histogram_data.h 00:03:21.914 TEST_HEADER include/spdk/idxd.h 00:03:21.914 TEST_HEADER include/spdk/idxd_spec.h 00:03:21.914 TEST_HEADER include/spdk/init.h 00:03:21.914 TEST_HEADER include/spdk/ioat.h 00:03:21.914 TEST_HEADER include/spdk/ioat_spec.h 00:03:21.914 TEST_HEADER include/spdk/iscsi_spec.h 00:03:21.914 TEST_HEADER include/spdk/json.h 00:03:21.914 TEST_HEADER include/spdk/jsonrpc.h 00:03:21.914 TEST_HEADER include/spdk/keyring.h 00:03:21.914 TEST_HEADER include/spdk/keyring_module.h 00:03:21.914 TEST_HEADER include/spdk/likely.h 00:03:21.914 TEST_HEADER include/spdk/log.h 00:03:21.914 TEST_HEADER include/spdk/lvol.h 00:03:21.914 TEST_HEADER 
include/spdk/md5.h 00:03:21.914 TEST_HEADER include/spdk/memory.h 00:03:21.914 TEST_HEADER include/spdk/mmio.h 00:03:21.914 TEST_HEADER include/spdk/nbd.h 00:03:21.914 TEST_HEADER include/spdk/net.h 00:03:21.914 TEST_HEADER include/spdk/notify.h 00:03:21.914 TEST_HEADER include/spdk/nvme.h 00:03:21.914 TEST_HEADER include/spdk/nvme_intel.h 00:03:22.222 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:22.222 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:22.222 TEST_HEADER include/spdk/nvme_spec.h 00:03:22.222 TEST_HEADER include/spdk/nvme_zns.h 00:03:22.222 CC examples/thread/thread/thread_ex.o 00:03:22.222 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:22.222 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:22.222 TEST_HEADER include/spdk/nvmf.h 00:03:22.222 TEST_HEADER include/spdk/nvmf_spec.h 00:03:22.222 TEST_HEADER include/spdk/nvmf_transport.h 00:03:22.222 TEST_HEADER include/spdk/opal.h 00:03:22.222 TEST_HEADER include/spdk/opal_spec.h 00:03:22.222 TEST_HEADER include/spdk/pci_ids.h 00:03:22.222 TEST_HEADER include/spdk/pipe.h 00:03:22.222 TEST_HEADER include/spdk/queue.h 00:03:22.222 TEST_HEADER include/spdk/reduce.h 00:03:22.222 TEST_HEADER include/spdk/rpc.h 00:03:22.222 TEST_HEADER include/spdk/scheduler.h 00:03:22.222 TEST_HEADER include/spdk/scsi.h 00:03:22.222 TEST_HEADER include/spdk/scsi_spec.h 00:03:22.222 LINK interrupt_tgt 00:03:22.222 TEST_HEADER include/spdk/sock.h 00:03:22.222 TEST_HEADER include/spdk/stdinc.h 00:03:22.222 TEST_HEADER include/spdk/string.h 00:03:22.222 LINK bdev_svc 00:03:22.222 TEST_HEADER include/spdk/thread.h 00:03:22.222 CC examples/sock/hello_world/hello_sock.o 00:03:22.222 TEST_HEADER include/spdk/trace.h 00:03:22.222 TEST_HEADER include/spdk/trace_parser.h 00:03:22.222 TEST_HEADER include/spdk/tree.h 00:03:22.222 TEST_HEADER include/spdk/ublk.h 00:03:22.222 TEST_HEADER include/spdk/util.h 00:03:22.222 TEST_HEADER include/spdk/uuid.h 00:03:22.222 TEST_HEADER include/spdk/version.h 00:03:22.222 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:22.222 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:22.222 TEST_HEADER include/spdk/vhost.h 00:03:22.222 TEST_HEADER include/spdk/vmd.h 00:03:22.222 TEST_HEADER include/spdk/xor.h 00:03:22.222 TEST_HEADER include/spdk/zipf.h 00:03:22.222 LINK idxd_perf 00:03:22.222 CXX test/cpp_headers/accel.o 00:03:22.222 LINK spdk_bdev 00:03:22.222 CC test/event/event_perf/event_perf.o 00:03:22.222 LINK spdk_top 00:03:22.222 CXX test/cpp_headers/accel_module.o 00:03:22.222 CC test/env/mem_callbacks/mem_callbacks.o 00:03:22.222 LINK thread 00:03:22.222 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:22.222 CC test/env/vtophys/vtophys.o 00:03:22.490 LINK hello_sock 00:03:22.490 LINK event_perf 00:03:22.490 CC test/env/memory/memory_ut.o 00:03:22.490 CXX test/cpp_headers/assert.o 00:03:22.490 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:22.490 LINK vtophys 00:03:22.490 LINK env_dpdk_post_init 00:03:22.490 CC test/rpc_client/rpc_client_test.o 00:03:22.490 CC test/nvme/aer/aer.o 00:03:22.490 CC test/event/reactor/reactor.o 00:03:22.490 CXX test/cpp_headers/barrier.o 00:03:22.490 CC examples/accel/perf/accel_perf.o 00:03:22.748 LINK reactor 00:03:22.748 CXX test/cpp_headers/base64.o 00:03:22.748 LINK rpc_client_test 00:03:22.748 CC examples/blob/hello_world/hello_blob.o 00:03:22.748 CC test/accel/dif/dif.o 00:03:22.748 LINK mem_callbacks 00:03:22.748 LINK nvme_fuzz 00:03:22.748 LINK aer 00:03:22.748 CXX test/cpp_headers/bdev.o 00:03:22.748 CC test/event/reactor_perf/reactor_perf.o 00:03:23.006 CC examples/blob/cli/blobcli.o 
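
The TEST_HEADER registrations above, together with the CXX test/cpp_headers/*.o compiles that follow, drive SPDK's header self-containment check: every public header under include/spdk/ is built in a translation unit of its own, so a header that silently depends on another include fails the build immediately. A minimal sketch of that idea; the temp-file naming and compiler flags are illustrative, not the harness's verbatim recipe:

    for hdr in include/spdk/*.h; do
        name=$(basename "$hdr" .h)
        # A TU whose only content is the header under test: any missing
        # transitive #include surfaces as an error blamed on that header.
        printf '#include <spdk/%s.h>\n' "$name" > "/tmp/${name}.cpp"
        c++ -Iinclude -c "/tmp/${name}.cpp" -o "/tmp/${name}.o" \
            || echo "not self-contained: $hdr"
    done
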
00:03:23.006 LINK hello_blob 00:03:23.006 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:23.006 LINK reactor_perf 00:03:23.006 CC test/nvme/reset/reset.o 00:03:23.006 CXX test/cpp_headers/bdev_module.o 00:03:23.006 LINK accel_perf 00:03:23.006 CC test/blobfs/mkfs/mkfs.o 00:03:23.006 CC test/event/app_repeat/app_repeat.o 00:03:23.263 CXX test/cpp_headers/bdev_zone.o 00:03:23.263 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:23.263 LINK mkfs 00:03:23.263 LINK reset 00:03:23.263 LINK app_repeat 00:03:23.263 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:23.263 CXX test/cpp_headers/bit_array.o 00:03:23.263 CC test/lvol/esnap/esnap.o 00:03:23.521 LINK blobcli 00:03:23.521 CC test/nvme/sgl/sgl.o 00:03:23.521 CXX test/cpp_headers/bit_pool.o 00:03:23.521 LINK dif 00:03:23.521 CC test/event/scheduler/scheduler.o 00:03:23.521 LINK memory_ut 00:03:23.521 CC examples/nvme/hello_world/hello_world.o 00:03:23.521 CXX test/cpp_headers/blob_bdev.o 00:03:23.778 CC test/env/pci/pci_ut.o 00:03:23.778 LINK sgl 00:03:23.778 CC examples/nvme/reconnect/reconnect.o 00:03:23.778 LINK vhost_fuzz 00:03:23.778 LINK hello_world 00:03:23.778 LINK scheduler 00:03:23.778 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:23.778 CXX test/cpp_headers/blobfs_bdev.o 00:03:23.778 CC test/nvme/e2edp/nvme_dp.o 00:03:23.778 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:23.778 CC test/nvme/overhead/overhead.o 00:03:23.778 CXX test/cpp_headers/blobfs.o 00:03:24.036 CC test/nvme/err_injection/err_injection.o 00:03:24.036 LINK hello_fsdev 00:03:24.036 LINK pci_ut 00:03:24.036 LINK reconnect 00:03:24.036 CXX test/cpp_headers/blob.o 00:03:24.036 LINK err_injection 00:03:24.036 LINK nvme_dp 00:03:24.036 CC test/nvme/startup/startup.o 00:03:24.036 LINK overhead 00:03:24.294 CXX test/cpp_headers/conf.o 00:03:24.294 CXX test/cpp_headers/config.o 00:03:24.294 CC test/nvme/reserve/reserve.o 00:03:24.294 LINK nvme_manage 00:03:24.294 CXX test/cpp_headers/cpuset.o 00:03:24.294 CC test/nvme/simple_copy/simple_copy.o 00:03:24.294 CC test/bdev/bdevio/bdevio.o 00:03:24.294 LINK startup 00:03:24.294 CC test/nvme/connect_stress/connect_stress.o 00:03:24.294 CC test/nvme/boot_partition/boot_partition.o 00:03:24.294 LINK reserve 00:03:24.552 CC examples/nvme/arbitration/arbitration.o 00:03:24.552 CXX test/cpp_headers/crc16.o 00:03:24.552 LINK boot_partition 00:03:24.552 LINK connect_stress 00:03:24.552 CC examples/nvme/hotplug/hotplug.o 00:03:24.552 LINK simple_copy 00:03:24.552 CXX test/cpp_headers/crc32.o 00:03:24.552 CXX test/cpp_headers/crc64.o 00:03:24.552 CC test/app/histogram_perf/histogram_perf.o 00:03:24.552 LINK bdevio 00:03:24.552 LINK iscsi_fuzz 00:03:24.811 CC test/app/jsoncat/jsoncat.o 00:03:24.811 CC test/nvme/compliance/nvme_compliance.o 00:03:24.811 CXX test/cpp_headers/dif.o 00:03:24.811 LINK hotplug 00:03:24.811 LINK histogram_perf 00:03:24.811 LINK arbitration 00:03:24.811 CC test/nvme/fused_ordering/fused_ordering.o 00:03:24.811 CXX test/cpp_headers/dma.o 00:03:24.811 LINK jsoncat 00:03:24.811 CXX test/cpp_headers/endian.o 00:03:24.811 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:24.811 CC test/nvme/fdp/fdp.o 00:03:24.811 CXX test/cpp_headers/env_dpdk.o 00:03:24.811 CC test/nvme/cuse/cuse.o 00:03:25.069 LINK fused_ordering 00:03:25.069 LINK nvme_compliance 00:03:25.069 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:25.069 CC test/app/stub/stub.o 00:03:25.069 CXX test/cpp_headers/env.o 00:03:25.069 LINK doorbell_aers 00:03:25.069 LINK stub 00:03:25.069 CC examples/bdev/hello_world/hello_bdev.o 00:03:25.069 LINK cmb_copy 
00:03:25.069 CC examples/bdev/bdevperf/bdevperf.o 00:03:25.069 CC examples/nvme/abort/abort.o 00:03:25.069 CXX test/cpp_headers/event.o 00:03:25.069 LINK fdp 00:03:25.327 CXX test/cpp_headers/fd_group.o 00:03:25.327 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:25.327 CXX test/cpp_headers/fd.o 00:03:25.327 LINK hello_bdev 00:03:25.327 CXX test/cpp_headers/file.o 00:03:25.327 CXX test/cpp_headers/fsdev.o 00:03:25.327 CXX test/cpp_headers/fsdev_module.o 00:03:25.327 LINK pmr_persistence 00:03:25.327 CXX test/cpp_headers/ftl.o 00:03:25.327 LINK abort 00:03:25.327 CXX test/cpp_headers/fuse_dispatcher.o 00:03:25.327 CXX test/cpp_headers/gpt_spec.o 00:03:25.584 CXX test/cpp_headers/hexlify.o 00:03:25.584 CXX test/cpp_headers/histogram_data.o 00:03:25.584 CXX test/cpp_headers/idxd.o 00:03:25.584 CXX test/cpp_headers/idxd_spec.o 00:03:25.584 CXX test/cpp_headers/init.o 00:03:25.584 CXX test/cpp_headers/ioat.o 00:03:25.584 CXX test/cpp_headers/ioat_spec.o 00:03:25.584 CXX test/cpp_headers/iscsi_spec.o 00:03:25.584 CXX test/cpp_headers/json.o 00:03:25.584 CXX test/cpp_headers/jsonrpc.o 00:03:25.584 CXX test/cpp_headers/keyring.o 00:03:25.584 CXX test/cpp_headers/keyring_module.o 00:03:25.584 CXX test/cpp_headers/likely.o 00:03:25.841 CXX test/cpp_headers/log.o 00:03:25.841 CXX test/cpp_headers/lvol.o 00:03:25.841 CXX test/cpp_headers/md5.o 00:03:25.841 CXX test/cpp_headers/memory.o 00:03:25.841 CXX test/cpp_headers/mmio.o 00:03:25.841 CXX test/cpp_headers/nbd.o 00:03:25.841 CXX test/cpp_headers/net.o 00:03:25.841 CXX test/cpp_headers/notify.o 00:03:25.841 CXX test/cpp_headers/nvme.o 00:03:25.841 CXX test/cpp_headers/nvme_intel.o 00:03:25.841 CXX test/cpp_headers/nvme_ocssd.o 00:03:25.841 LINK bdevperf 00:03:25.841 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:25.842 CXX test/cpp_headers/nvme_spec.o 00:03:25.842 CXX test/cpp_headers/nvme_zns.o 00:03:25.842 CXX test/cpp_headers/nvmf_cmd.o 00:03:26.099 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:26.099 CXX test/cpp_headers/nvmf.o 00:03:26.099 CXX test/cpp_headers/nvmf_spec.o 00:03:26.099 CXX test/cpp_headers/nvmf_transport.o 00:03:26.099 CXX test/cpp_headers/opal.o 00:03:26.099 CXX test/cpp_headers/opal_spec.o 00:03:26.099 CXX test/cpp_headers/pci_ids.o 00:03:26.099 CXX test/cpp_headers/pipe.o 00:03:26.100 CXX test/cpp_headers/queue.o 00:03:26.100 LINK cuse 00:03:26.100 CXX test/cpp_headers/reduce.o 00:03:26.100 CXX test/cpp_headers/rpc.o 00:03:26.100 CC examples/nvmf/nvmf/nvmf.o 00:03:26.100 CXX test/cpp_headers/scheduler.o 00:03:26.100 CXX test/cpp_headers/scsi.o 00:03:26.100 CXX test/cpp_headers/scsi_spec.o 00:03:26.100 CXX test/cpp_headers/sock.o 00:03:26.358 CXX test/cpp_headers/stdinc.o 00:03:26.358 CXX test/cpp_headers/string.o 00:03:26.358 CXX test/cpp_headers/thread.o 00:03:26.358 CXX test/cpp_headers/trace.o 00:03:26.358 CXX test/cpp_headers/trace_parser.o 00:03:26.358 CXX test/cpp_headers/tree.o 00:03:26.358 CXX test/cpp_headers/ublk.o 00:03:26.358 CXX test/cpp_headers/util.o 00:03:26.358 CXX test/cpp_headers/uuid.o 00:03:26.358 CXX test/cpp_headers/version.o 00:03:26.358 CXX test/cpp_headers/vfio_user_pci.o 00:03:26.358 CXX test/cpp_headers/vfio_user_spec.o 00:03:26.358 CXX test/cpp_headers/vhost.o 00:03:26.358 CXX test/cpp_headers/vmd.o 00:03:26.358 CXX test/cpp_headers/xor.o 00:03:26.358 CXX test/cpp_headers/zipf.o 00:03:26.358 LINK nvmf 00:03:28.265 LINK esnap 00:03:28.265 00:03:28.265 real 1m5.020s 00:03:28.265 user 6m3.559s 00:03:28.265 sys 1m6.454s 00:03:28.265 17:20:01 make -- common/autotest_common.sh@1130 -- $ 
xtrace_disable 00:03:28.265 ************************************ 00:03:28.265 END TEST make 00:03:28.265 ************************************ 00:03:28.265 17:20:01 make -- common/autotest_common.sh@10 -- $ set +x 00:03:28.265 17:20:01 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:28.265 17:20:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:28.265 17:20:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:28.265 17:20:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.265 17:20:01 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:28.540 17:20:01 -- pm/common@44 -- $ pid=5076 00:03:28.540 17:20:01 -- pm/common@50 -- $ kill -TERM 5076 00:03:28.540 17:20:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.540 17:20:01 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:28.540 17:20:01 -- pm/common@44 -- $ pid=5077 00:03:28.540 17:20:01 -- pm/common@50 -- $ kill -TERM 5077 00:03:28.540 17:20:01 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:28.540 17:20:01 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:28.540 17:20:01 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:28.540 17:20:01 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:28.540 17:20:01 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:28.540 17:20:01 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:28.540 17:20:01 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:28.540 17:20:01 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:28.540 17:20:01 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:28.540 17:20:01 -- scripts/common.sh@336 -- # IFS=.-: 00:03:28.540 17:20:01 -- scripts/common.sh@336 -- # read -ra ver1 00:03:28.540 17:20:01 -- scripts/common.sh@337 -- # IFS=.-: 00:03:28.540 17:20:01 -- scripts/common.sh@337 -- # read -ra ver2 00:03:28.540 17:20:01 -- scripts/common.sh@338 -- # local 'op=<' 00:03:28.540 17:20:01 -- scripts/common.sh@340 -- # ver1_l=2 00:03:28.540 17:20:01 -- scripts/common.sh@341 -- # ver2_l=1 00:03:28.540 17:20:01 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:28.540 17:20:01 -- scripts/common.sh@344 -- # case "$op" in 00:03:28.540 17:20:01 -- scripts/common.sh@345 -- # : 1 00:03:28.540 17:20:01 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:28.540 17:20:01 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:28.540 17:20:01 -- scripts/common.sh@365 -- # decimal 1 00:03:28.540 17:20:01 -- scripts/common.sh@353 -- # local d=1 00:03:28.540 17:20:01 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:28.540 17:20:01 -- scripts/common.sh@355 -- # echo 1 00:03:28.540 17:20:01 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:28.540 17:20:01 -- scripts/common.sh@366 -- # decimal 2 00:03:28.540 17:20:01 -- scripts/common.sh@353 -- # local d=2 00:03:28.540 17:20:01 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:28.540 17:20:01 -- scripts/common.sh@355 -- # echo 2 00:03:28.540 17:20:01 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:28.540 17:20:01 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:28.540 17:20:01 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:28.540 17:20:01 -- scripts/common.sh@368 -- # return 0 00:03:28.540 17:20:01 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:28.540 17:20:01 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:28.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.540 --rc genhtml_branch_coverage=1 00:03:28.540 --rc genhtml_function_coverage=1 00:03:28.540 --rc genhtml_legend=1 00:03:28.540 --rc geninfo_all_blocks=1 00:03:28.540 --rc geninfo_unexecuted_blocks=1 00:03:28.540 00:03:28.540 ' 00:03:28.540 17:20:01 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:28.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.540 --rc genhtml_branch_coverage=1 00:03:28.540 --rc genhtml_function_coverage=1 00:03:28.540 --rc genhtml_legend=1 00:03:28.540 --rc geninfo_all_blocks=1 00:03:28.540 --rc geninfo_unexecuted_blocks=1 00:03:28.540 00:03:28.540 ' 00:03:28.540 17:20:01 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:28.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.540 --rc genhtml_branch_coverage=1 00:03:28.540 --rc genhtml_function_coverage=1 00:03:28.540 --rc genhtml_legend=1 00:03:28.540 --rc geninfo_all_blocks=1 00:03:28.540 --rc geninfo_unexecuted_blocks=1 00:03:28.540 00:03:28.540 ' 00:03:28.540 17:20:01 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:28.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.540 --rc genhtml_branch_coverage=1 00:03:28.540 --rc genhtml_function_coverage=1 00:03:28.540 --rc genhtml_legend=1 00:03:28.540 --rc geninfo_all_blocks=1 00:03:28.540 --rc geninfo_unexecuted_blocks=1 00:03:28.540 00:03:28.540 ' 00:03:28.540 17:20:01 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:28.540 17:20:01 -- nvmf/common.sh@7 -- # uname -s 00:03:28.540 17:20:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:28.540 17:20:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:28.540 17:20:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:28.540 17:20:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:28.540 17:20:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:28.540 17:20:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:28.540 17:20:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:28.540 17:20:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:28.540 17:20:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:28.540 17:20:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:28.540 17:20:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6a1993f-f9da-47c9-a274-8812dd505b00 00:03:28.540 
17:20:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=b6a1993f-f9da-47c9-a274-8812dd505b00 00:03:28.540 17:20:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:28.540 17:20:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:28.540 17:20:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:28.540 17:20:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:28.540 17:20:01 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:28.540 17:20:01 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:28.540 17:20:01 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:28.540 17:20:01 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:28.540 17:20:01 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:28.540 17:20:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.541 17:20:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.541 17:20:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.541 17:20:01 -- paths/export.sh@5 -- # export PATH 00:03:28.541 17:20:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.541 17:20:01 -- nvmf/common.sh@51 -- # : 0 00:03:28.541 17:20:01 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:28.541 17:20:01 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:28.541 17:20:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:28.541 17:20:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:28.541 17:20:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:28.541 17:20:01 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:28.541 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:28.541 17:20:01 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:28.541 17:20:01 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:28.541 17:20:01 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:28.541 17:20:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:28.541 17:20:01 -- spdk/autotest.sh@32 -- # uname -s 00:03:28.541 17:20:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:28.541 17:20:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:28.541 17:20:01 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:28.541 17:20:01 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:28.541 17:20:01 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:28.541 17:20:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:28.541 17:20:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:28.541 17:20:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:28.541 17:20:01 -- spdk/autotest.sh@48 -- # udevadm_pid=54243 00:03:28.541 17:20:01 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:28.541 17:20:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:28.541 17:20:01 -- pm/common@17 -- # local monitor 00:03:28.541 17:20:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.541 17:20:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.541 17:20:01 -- pm/common@25 -- # sleep 1 00:03:28.541 17:20:01 -- pm/common@21 -- # date +%s 00:03:28.541 17:20:01 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733592001 00:03:28.541 17:20:01 -- pm/common@21 -- # date +%s 00:03:28.541 17:20:01 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733592001 00:03:28.541 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733592001_collect-vmstat.pm.log 00:03:28.541 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733592001_collect-cpu-load.pm.log 00:03:29.925 17:20:02 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:29.925 17:20:02 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:29.925 17:20:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:29.925 17:20:02 -- common/autotest_common.sh@10 -- # set +x 00:03:29.925 17:20:02 -- spdk/autotest.sh@59 -- # create_test_list 00:03:29.925 17:20:02 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:29.925 17:20:02 -- common/autotest_common.sh@10 -- # set +x 00:03:29.925 17:20:02 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:29.925 17:20:02 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:29.925 17:20:02 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:29.925 17:20:02 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:29.925 17:20:02 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:29.925 17:20:02 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:29.925 17:20:02 -- common/autotest_common.sh@1457 -- # uname 00:03:29.925 17:20:02 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:29.925 17:20:02 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:29.925 17:20:02 -- common/autotest_common.sh@1477 -- # uname 00:03:29.925 17:20:02 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:29.925 17:20:02 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:29.925 17:20:02 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:29.925 lcov: LCOV version 1.15 00:03:29.925 17:20:02 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:44.825 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:44.825 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:59.791 17:20:32 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:59.791 17:20:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:59.791 17:20:32 -- common/autotest_common.sh@10 -- # set +x 00:03:59.791 17:20:32 -- spdk/autotest.sh@78 -- # rm -f 00:03:59.791 17:20:32 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.053 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.624 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:00.624 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:00.624 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:00.624 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:00.624 17:20:33 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:00.624 17:20:33 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:00.624 17:20:33 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:00.624 17:20:33 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:00.624 17:20:33 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:00.624 17:20:33 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:00.624 17:20:33 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:00.624 17:20:33 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:00.624 17:20:33 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:00.624 17:20:33 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:00.624 17:20:33 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:00.624 17:20:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:00.624 17:20:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.624 17:20:33 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:00.624 17:20:33 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:00.624 17:20:33 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:00.624 17:20:33 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:00.624 17:20:33 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:00.624 17:20:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:00.624 17:20:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.624 17:20:33 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:00.624 17:20:33 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:04:00.624 17:20:33 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:00.624 17:20:33 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:04:00.624 17:20:33 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:00.624 17:20:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:00.624 17:20:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.624 17:20:33 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:00.624 17:20:33 -- common/autotest_common.sh@1671 
-- # is_block_zoned nvme2n2 00:04:00.624 17:20:33 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:00.624 17:20:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:00.624 17:20:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.624 17:20:33 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:00.624 17:20:33 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:04:00.624 17:20:33 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:00.624 17:20:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:00.624 17:20:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.624 17:20:33 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:00.624 17:20:33 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:04:00.624 17:20:33 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:00.624 17:20:33 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:04:00.624 17:20:33 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:00.624 17:20:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:00.624 17:20:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.624 17:20:33 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:00.624 17:20:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.624 17:20:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.624 17:20:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:00.624 17:20:33 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:00.625 17:20:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:00.625 No valid GPT data, bailing 00:04:00.625 17:20:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:00.625 17:20:33 -- scripts/common.sh@394 -- # pt= 00:04:00.625 17:20:33 -- scripts/common.sh@395 -- # return 1 00:04:00.625 17:20:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:00.625 1+0 records in 00:04:00.625 1+0 records out 00:04:00.625 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0331683 s, 31.6 MB/s 00:04:00.625 17:20:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.625 17:20:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.625 17:20:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:00.625 17:20:33 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:00.625 17:20:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:00.886 No valid GPT data, bailing 00:04:00.886 17:20:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:00.886 17:20:34 -- scripts/common.sh@394 -- # pt= 00:04:00.886 17:20:34 -- scripts/common.sh@395 -- # return 1 00:04:00.886 17:20:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:00.886 1+0 records in 00:04:00.886 1+0 records out 00:04:00.886 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00526251 s, 199 MB/s 00:04:00.886 17:20:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.886 17:20:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.886 17:20:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:00.886 17:20:34 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:00.886 17:20:34 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:00.886 No valid GPT data, bailing 00:04:00.886 17:20:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:00.886 17:20:34 -- scripts/common.sh@394 -- # pt= 00:04:00.886 17:20:34 -- scripts/common.sh@395 -- # return 1 00:04:00.886 17:20:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:00.886 1+0 records in 00:04:00.886 1+0 records out 00:04:00.886 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00684507 s, 153 MB/s 00:04:00.886 17:20:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.886 17:20:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.886 17:20:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:00.886 17:20:34 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:00.886 17:20:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:00.886 No valid GPT data, bailing 00:04:00.886 17:20:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:00.886 17:20:34 -- scripts/common.sh@394 -- # pt= 00:04:00.886 17:20:34 -- scripts/common.sh@395 -- # return 1 00:04:00.886 17:20:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:01.147 1+0 records in 00:04:01.147 1+0 records out 00:04:01.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00590429 s, 178 MB/s 00:04:01.147 17:20:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:01.147 17:20:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:01.147 17:20:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:01.147 17:20:34 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:01.147 17:20:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:01.147 No valid GPT data, bailing 00:04:01.147 17:20:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:01.147 17:20:34 -- scripts/common.sh@394 -- # pt= 00:04:01.147 17:20:34 -- scripts/common.sh@395 -- # return 1 00:04:01.147 17:20:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:01.147 1+0 records in 00:04:01.147 1+0 records out 00:04:01.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00587131 s, 179 MB/s 00:04:01.147 17:20:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:01.147 17:20:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:01.147 17:20:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:01.147 17:20:34 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:01.147 17:20:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:01.147 No valid GPT data, bailing 00:04:01.147 17:20:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:01.147 17:20:34 -- scripts/common.sh@394 -- # pt= 00:04:01.147 17:20:34 -- scripts/common.sh@395 -- # return 1 00:04:01.147 17:20:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:01.147 1+0 records in 00:04:01.147 1+0 records out 00:04:01.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465654 s, 225 MB/s 00:04:01.147 17:20:34 -- spdk/autotest.sh@105 -- # sync 00:04:01.147 17:20:34 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:01.147 17:20:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:01.147 17:20:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:03.061 
17:20:36 -- spdk/autotest.sh@111 -- # uname -s 00:04:03.061 17:20:36 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:03.061 17:20:36 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:03.061 17:20:36 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:03.321 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.894 Hugepages 00:04:03.894 node hugesize free / total 00:04:03.894 node0 1048576kB 0 / 0 00:04:03.894 node0 2048kB 0 / 0 00:04:03.894 00:04:03.894 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:03.894 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:03.894 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:04.156 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:04.156 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:04.156 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:04.156 17:20:37 -- spdk/autotest.sh@117 -- # uname -s 00:04:04.156 17:20:37 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:04.156 17:20:37 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:04.156 17:20:37 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:04.729 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.312 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.312 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.312 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.312 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.312 17:20:38 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:06.699 17:20:39 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:06.699 17:20:39 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:06.699 17:20:39 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:06.699 17:20:39 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:06.699 17:20:39 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:06.699 17:20:39 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:06.699 17:20:39 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:06.699 17:20:39 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:06.699 17:20:39 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:06.699 17:20:39 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:06.699 17:20:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:06.699 17:20:39 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:06.699 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:06.959 Waiting for block devices as requested 00:04:06.959 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:06.959 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:07.221 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:07.221 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:12.504 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:12.504 17:20:45 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:12.504 17:20:45 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
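
The "No valid GPT data, bailing" passes earlier in this cleanup stage showed the per-namespace wipe: each whole namespace (/dev/nvme*n!(*p*)) with no recognizable partition table gets its first MiB zeroed, so stale GPT or filesystem headers cannot bleed into the next test run. A simplified sketch of that loop; the real script additionally consults scripts/spdk-gpt.py and a block_in_use guard, both elided here:

    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do        # whole namespaces, skip partitions
        pt=$(blkid -s PTTYPE -o value "$dev" || true)
        [[ -n "$pt" ]] && continue          # a partition table means: leave it alone
        dd if=/dev/zero of="$dev" bs=1M count=1   # zero the first MiB of metadata
    done
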
00:04:12.504 17:20:45 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:12.504 17:20:45 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:12.504 17:20:45 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:12.504 17:20:45 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:12.504 17:20:45 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:12.504 17:20:45 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:12.504 17:20:45 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:12.504 17:20:45 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:12.504 17:20:45 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:12.504 17:20:45 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:12.504 17:20:45 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:12.504 17:20:45 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:12.504 17:20:45 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:12.504 17:20:45 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:12.504 17:20:45 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:12.504 17:20:45 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:12.504 17:20:45 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:12.504 17:20:45 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:12.504 17:20:45 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:12.504 17:20:45 -- common/autotest_common.sh@1543 -- # continue 00:04:12.504 17:20:45 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:12.504 17:20:45 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:12.504 17:20:45 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:12.504 17:20:45 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:12.504 17:20:45 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:12.504 17:20:45 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:12.504 17:20:45 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:12.504 17:20:45 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:12.504 17:20:45 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:12.504 17:20:45 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:12.504 17:20:45 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:12.504 17:20:45 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:12.504 17:20:45 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:12.504 17:20:45 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:12.504 17:20:45 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:12.504 17:20:45 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:12.504 17:20:45 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:12.504 17:20:45 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:12.504 17:20:45 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:12.504 17:20:45 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:04:12.504 17:20:45 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:12.504 17:20:45 -- common/autotest_common.sh@1543 -- # continue 00:04:12.504 17:20:45 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:12.504 17:20:45 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:12.504 17:20:45 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:12.504 17:20:45 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:12.504 17:20:45 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:12.504 17:20:45 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:12.504 17:20:45 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:12.504 17:20:45 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:12.504 17:20:45 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:12.504 17:20:45 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:12.504 17:20:45 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:12.504 17:20:45 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:12.504 17:20:45 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:12.504 17:20:45 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:12.504 17:20:45 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:12.504 17:20:45 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:12.504 17:20:45 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:12.504 17:20:45 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:12.504 17:20:45 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:12.504 17:20:45 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:12.504 17:20:45 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:12.504 17:20:45 -- common/autotest_common.sh@1543 -- # continue 00:04:12.504 17:20:45 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:12.504 17:20:45 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:12.504 17:20:45 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:12.504 17:20:45 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:12.504 17:20:45 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:12.504 17:20:45 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:12.504 17:20:45 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:12.504 17:20:45 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:12.504 17:20:45 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:12.504 17:20:45 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:12.504 17:20:45 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:12.504 17:20:45 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:12.504 17:20:45 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:12.504 17:20:45 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:12.504 17:20:45 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:12.504 17:20:45 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:12.505 17:20:45 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:12.505 17:20:45 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:12.505 17:20:45 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:12.505 17:20:45 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:12.505 17:20:45 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:12.505 17:20:45 -- common/autotest_common.sh@1543 -- # continue 00:04:12.505 17:20:45 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:12.505 17:20:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:12.505 17:20:45 -- common/autotest_common.sh@10 -- # set +x 00:04:12.505 17:20:45 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:12.505 17:20:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:12.505 17:20:45 -- common/autotest_common.sh@10 -- # set +x 00:04:12.505 17:20:45 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:12.761 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:13.329 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:13.329 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:13.329 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:13.329 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:13.329 17:20:46 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:13.329 17:20:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:13.329 17:20:46 -- common/autotest_common.sh@10 -- # set +x 00:04:13.590 17:20:46 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:13.590 17:20:46 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:13.590 17:20:46 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:13.590 17:20:46 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:13.590 17:20:46 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:13.590 17:20:46 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:13.590 17:20:46 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:13.590 17:20:46 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:13.590 17:20:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:13.590 17:20:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:13.590 17:20:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:13.590 17:20:46 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:13.590 17:20:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:13.590 17:20:46 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:13.590 17:20:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:13.590 17:20:46 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:13.590 17:20:46 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:13.590 17:20:46 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:13.590 17:20:46 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:13.590 17:20:46 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:13.590 17:20:46 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:13.590 17:20:46 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:13.590 
17:20:46 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:13.590 17:20:46 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:13.590 17:20:46 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:13.590 17:20:46 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:13.590 17:20:46 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:13.590 17:20:46 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:13.590 17:20:46 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:13.590 17:20:46 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:13.590 17:20:46 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:13.590 17:20:46 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:13.590 17:20:46 -- common/autotest_common.sh@1572 -- # return 0 00:04:13.590 17:20:46 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:13.590 17:20:46 -- common/autotest_common.sh@1580 -- # return 0 00:04:13.590 17:20:46 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:13.590 17:20:46 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:13.590 17:20:46 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:13.590 17:20:46 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:13.590 17:20:46 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:13.590 17:20:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:13.590 17:20:46 -- common/autotest_common.sh@10 -- # set +x 00:04:13.590 17:20:46 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:13.590 17:20:46 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:13.590 17:20:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.590 17:20:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.590 17:20:46 -- common/autotest_common.sh@10 -- # set +x 00:04:13.590 ************************************ 00:04:13.590 START TEST env 00:04:13.590 ************************************ 00:04:13.590 17:20:46 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:13.590 * Looking for test storage... 
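
The pre-cleanup pass above leans on two lookups worth spelling out: mapping a PCI BDF to its /dev/nvmeX node by resolving the /sys/class/nvme symlinks, and parsing `nvme id-ctrl` for the OACS capability word and unallocated capacity. An illustrative reconstruction consistent with the xtrace (the function name and flow mirror get_nvme_ctrlr_from_bdf, but this is not the verbatim source):

    get_ctrlr_from_bdf() {
        local bdf=$1 path
        # Each /sys/class/nvme/nvmeX is a symlink whose real path runs through
        # the owning PCI device, e.g. .../0000:00:10.0/nvme/nvme1.
        path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || return 1
        echo "/dev/$(basename "$path")"
    }

    ctrlr=$(get_ctrlr_from_bdf 0000:00:10.0)                 # -> /dev/nvme1 in this run
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)  # ' 0x12a' in the trace
    if (( oacs & 0x8 )); then                                # bit 3: namespace management
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && echo "$ctrlr: no unallocated NVM to revert"
    fi
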
00:04:13.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:13.591 17:20:46 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:13.591 17:20:46 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:13.591 17:20:46 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:13.591 17:20:46 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:13.591 17:20:46 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:13.591 17:20:46 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:13.591 17:20:46 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:13.591 17:20:46 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.591 17:20:46 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:13.591 17:20:46 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:13.591 17:20:46 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:13.591 17:20:46 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:13.591 17:20:46 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:13.591 17:20:46 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:13.591 17:20:46 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:13.591 17:20:46 env -- scripts/common.sh@344 -- # case "$op" in 00:04:13.591 17:20:46 env -- scripts/common.sh@345 -- # : 1 00:04:13.591 17:20:46 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:13.591 17:20:46 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:13.591 17:20:46 env -- scripts/common.sh@365 -- # decimal 1 00:04:13.591 17:20:46 env -- scripts/common.sh@353 -- # local d=1 00:04:13.591 17:20:46 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.591 17:20:46 env -- scripts/common.sh@355 -- # echo 1 00:04:13.591 17:20:46 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:13.591 17:20:46 env -- scripts/common.sh@366 -- # decimal 2 00:04:13.591 17:20:46 env -- scripts/common.sh@353 -- # local d=2 00:04:13.591 17:20:46 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.591 17:20:46 env -- scripts/common.sh@355 -- # echo 2 00:04:13.591 17:20:46 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:13.591 17:20:46 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:13.591 17:20:46 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:13.591 17:20:46 env -- scripts/common.sh@368 -- # return 0 00:04:13.591 17:20:46 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.591 17:20:46 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:13.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.591 --rc genhtml_branch_coverage=1 00:04:13.591 --rc genhtml_function_coverage=1 00:04:13.591 --rc genhtml_legend=1 00:04:13.591 --rc geninfo_all_blocks=1 00:04:13.591 --rc geninfo_unexecuted_blocks=1 00:04:13.591 00:04:13.591 ' 00:04:13.591 17:20:46 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:13.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.591 --rc genhtml_branch_coverage=1 00:04:13.591 --rc genhtml_function_coverage=1 00:04:13.591 --rc genhtml_legend=1 00:04:13.591 --rc geninfo_all_blocks=1 00:04:13.591 --rc geninfo_unexecuted_blocks=1 00:04:13.591 00:04:13.591 ' 00:04:13.591 17:20:46 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:13.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.591 --rc genhtml_branch_coverage=1 00:04:13.591 --rc genhtml_function_coverage=1 00:04:13.591 --rc 
genhtml_legend=1 00:04:13.591 --rc geninfo_all_blocks=1 00:04:13.591 --rc geninfo_unexecuted_blocks=1 00:04:13.591 00:04:13.591 ' 00:04:13.591 17:20:46 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:13.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.591 --rc genhtml_branch_coverage=1 00:04:13.591 --rc genhtml_function_coverage=1 00:04:13.591 --rc genhtml_legend=1 00:04:13.591 --rc geninfo_all_blocks=1 00:04:13.591 --rc geninfo_unexecuted_blocks=1 00:04:13.591 00:04:13.591 ' 00:04:13.591 17:20:46 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:13.591 17:20:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.591 17:20:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.591 17:20:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.850 ************************************ 00:04:13.851 START TEST env_memory 00:04:13.851 ************************************ 00:04:13.851 17:20:46 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:13.851 00:04:13.851 00:04:13.851 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.851 http://cunit.sourceforge.net/ 00:04:13.851 00:04:13.851 00:04:13.851 Suite: memory 00:04:13.851 Test: alloc and free memory map ...[2024-12-07 17:20:47.024334] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:13.851 passed 00:04:13.851 Test: mem map translation ...[2024-12-07 17:20:47.063055] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:13.851 [2024-12-07 17:20:47.063097] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:13.851 [2024-12-07 17:20:47.063155] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:13.851 [2024-12-07 17:20:47.063170] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:13.851 passed 00:04:13.851 Test: mem map registration ...[2024-12-07 17:20:47.131126] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:13.851 [2024-12-07 17:20:47.131178] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:13.851 passed 00:04:13.851 Test: mem map adjacent registrations ...passed 00:04:13.851 00:04:13.851 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.851 suites 1 1 n/a 0 0 00:04:13.851 tests 4 4 4 0 0 00:04:13.851 asserts 152 152 152 0 n/a 00:04:13.851 00:04:13.851 Elapsed time = 0.233 seconds 00:04:14.110 00:04:14.110 real 0m0.265s 00:04:14.110 user 0m0.234s 00:04:14.110 sys 0m0.023s 00:04:14.110 17:20:47 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.110 17:20:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:14.110 ************************************ 00:04:14.110 END TEST env_memory 00:04:14.110 ************************************ 00:04:14.110 17:20:47 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:14.110 17:20:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.110 17:20:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.110 17:20:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.110 ************************************ 00:04:14.110 START TEST env_vtophys 00:04:14.110 ************************************ 00:04:14.110 17:20:47 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:14.110 EAL: lib.eal log level changed from notice to debug 00:04:14.110 EAL: Detected lcore 0 as core 0 on socket 0 00:04:14.110 EAL: Detected lcore 1 as core 0 on socket 0 00:04:14.110 EAL: Detected lcore 2 as core 0 on socket 0 00:04:14.110 EAL: Detected lcore 3 as core 0 on socket 0 00:04:14.110 EAL: Detected lcore 4 as core 0 on socket 0 00:04:14.110 EAL: Detected lcore 5 as core 0 on socket 0 00:04:14.110 EAL: Detected lcore 6 as core 0 on socket 0 00:04:14.110 EAL: Detected lcore 7 as core 0 on socket 0 00:04:14.110 EAL: Detected lcore 8 as core 0 on socket 0 00:04:14.110 EAL: Detected lcore 9 as core 0 on socket 0 00:04:14.110 EAL: Maximum logical cores by configuration: 128 00:04:14.110 EAL: Detected CPU lcores: 10 00:04:14.110 EAL: Detected NUMA nodes: 1 00:04:14.110 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:14.110 EAL: Detected shared linkage of DPDK 00:04:14.110 EAL: No shared files mode enabled, IPC will be disabled 00:04:14.110 EAL: Selected IOVA mode 'PA' 00:04:14.110 EAL: Probing VFIO support... 00:04:14.110 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:14.110 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:14.110 EAL: Ask a virtual area of 0x2e000 bytes 00:04:14.110 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:14.110 EAL: Setting up physically contiguous memory... 
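[annotation] The EAL sequence above (lcore/NUMA detection, VFIO probing, IOVA mode selection, contiguous memory setup) is driven by SPDK's environment init. A minimal sketch of that entry point follows; the app name and core mask are illustrative assumptions, not the values this test binary actually passed.

    #include <stdio.h>
    #include "spdk/env.h"

    int main(void)
    {
        struct spdk_env_opts opts;

        /* Fill in defaults, then override what we care about. */
        spdk_env_opts_init(&opts);
        opts.name = "vtophys_demo";   /* hypothetical app name */
        opts.core_mask = "0x1";       /* single core, like -c 0x1 */

        /* This call performs the EAL setup logged above: VFIO probing,
         * IOVA mode selection, and reservation of the hugepage-backed
         * memseg lists. */
        if (spdk_env_init(&opts) < 0) {
            fprintf(stderr, "Unable to initialize SPDK env\n");
            return 1;
        }

        spdk_env_fini();
        return 0;
    }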
00:04:14.110 EAL: Setting maximum number of open files to 524288 00:04:14.110 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:14.110 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:14.110 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.110 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:14.110 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.110 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.110 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:14.110 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:14.110 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.110 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:14.110 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.110 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.110 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:14.110 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:14.110 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.110 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:14.110 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.110 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.110 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:14.110 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:14.110 EAL: Ask a virtual area of 0x61000 bytes 00:04:14.110 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:14.110 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:14.110 EAL: Ask a virtual area of 0x400000000 bytes 00:04:14.110 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:14.110 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:14.110 EAL: Hugepages will be freed exactly as allocated. 00:04:14.110 EAL: No shared files mode enabled, IPC is disabled 00:04:14.111 EAL: No shared files mode enabled, IPC is disabled 00:04:14.111 EAL: TSC frequency is ~2600000 KHz 00:04:14.111 EAL: Main lcore 0 is ready (tid=7fc2702bfa40;cpuset=[0]) 00:04:14.111 EAL: Trying to obtain current memory policy. 00:04:14.111 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.111 EAL: Restoring previous memory policy: 0 00:04:14.111 EAL: request: mp_malloc_sync 00:04:14.111 EAL: No shared files mode enabled, IPC is disabled 00:04:14.111 EAL: Heap on socket 0 was expanded by 2MB 00:04:14.111 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:14.111 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:14.111 EAL: Mem event callback 'spdk:(nil)' registered 00:04:14.111 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:14.111 00:04:14.111 00:04:14.111 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.111 http://cunit.sourceforge.net/ 00:04:14.111 00:04:14.111 00:04:14.111 Suite: components_suite 00:04:14.369 Test: vtophys_malloc_test ...passed 00:04:14.369 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
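[annotation] The reserved sizes above are self-consistent: each memseg list tracks n_segs:8192 pages of hugepage_sz:2097152 (2 MiB), so the data area is 8192 * 2 MiB = 16 GiB = 0x400000000 bytes, preceded by a 0x61000-byte bookkeeping header. A quick check of that arithmetic:

    #include <assert.h>
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const uint64_t hugepage_sz = 2097152;  /* 2 MiB, from the log */
        const uint64_t n_segs = 8192;          /* per memseg list */
        const uint64_t va_size = n_segs * hugepage_sz;

        /* Matches "Virtual area found at ... (size = 0x400000000)". */
        assert(va_size == 0x400000000ULL);
        printf("per-list VA: %#" PRIx64 " (%" PRIu64 " GiB)\n",
               va_size, va_size >> 30);

        /* Four lists on socket 0 => 64 GiB of address space reserved,
         * none of it backed until hugepages are actually allocated. */
        printf("total reserved: %" PRIu64 " GiB\n", (4 * va_size) >> 30);
        return 0;
    }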
00:04:14.369 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.369 EAL: Restoring previous memory policy: 4 00:04:14.369 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.369 EAL: request: mp_malloc_sync 00:04:14.369 EAL: No shared files mode enabled, IPC is disabled 00:04:14.369 EAL: Heap on socket 0 was expanded by 4MB 00:04:14.369 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.369 EAL: request: mp_malloc_sync 00:04:14.369 EAL: No shared files mode enabled, IPC is disabled 00:04:14.369 EAL: Heap on socket 0 was shrunk by 4MB 00:04:14.369 EAL: Trying to obtain current memory policy. 00:04:14.369 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.628 EAL: Restoring previous memory policy: 4 00:04:14.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.628 EAL: request: mp_malloc_sync 00:04:14.628 EAL: No shared files mode enabled, IPC is disabled 00:04:14.628 EAL: Heap on socket 0 was expanded by 6MB 00:04:14.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.628 EAL: request: mp_malloc_sync 00:04:14.628 EAL: No shared files mode enabled, IPC is disabled 00:04:14.628 EAL: Heap on socket 0 was shrunk by 6MB 00:04:14.628 EAL: Trying to obtain current memory policy. 00:04:14.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.628 EAL: Restoring previous memory policy: 4 00:04:14.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.628 EAL: request: mp_malloc_sync 00:04:14.629 EAL: No shared files mode enabled, IPC is disabled 00:04:14.629 EAL: Heap on socket 0 was expanded by 10MB 00:04:14.629 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.629 EAL: request: mp_malloc_sync 00:04:14.629 EAL: No shared files mode enabled, IPC is disabled 00:04:14.629 EAL: Heap on socket 0 was shrunk by 10MB 00:04:14.629 EAL: Trying to obtain current memory policy. 00:04:14.629 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.629 EAL: Restoring previous memory policy: 4 00:04:14.629 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.629 EAL: request: mp_malloc_sync 00:04:14.629 EAL: No shared files mode enabled, IPC is disabled 00:04:14.629 EAL: Heap on socket 0 was expanded by 18MB 00:04:14.629 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.629 EAL: request: mp_malloc_sync 00:04:14.629 EAL: No shared files mode enabled, IPC is disabled 00:04:14.629 EAL: Heap on socket 0 was shrunk by 18MB 00:04:14.629 EAL: Trying to obtain current memory policy. 00:04:14.629 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.629 EAL: Restoring previous memory policy: 4 00:04:14.629 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.629 EAL: request: mp_malloc_sync 00:04:14.629 EAL: No shared files mode enabled, IPC is disabled 00:04:14.629 EAL: Heap on socket 0 was expanded by 34MB 00:04:14.629 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.629 EAL: request: mp_malloc_sync 00:04:14.629 EAL: No shared files mode enabled, IPC is disabled 00:04:14.629 EAL: Heap on socket 0 was shrunk by 34MB 00:04:14.629 EAL: Trying to obtain current memory policy. 
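[annotation] The expand/shrink pairs above follow the vtophys_spdk_malloc_test pattern: allocate a buffer, verify its translation, free it, doubling the size each round (the reported heap figures include the 2 MB already held from the initial allocation, hence 4, 6, 10, 18, 34 MB and so on). A minimal sketch of that loop, assuming the public spdk_dma_malloc/spdk_dma_free API rather than the test's exact internals:

    #include <stdio.h>
    #include "spdk/env.h"

    /* Walk doubling allocation sizes; each spdk_dma_malloc() that cannot
     * be satisfied from the current heap triggers the 'spdk:' mem event
     * callback and an "expanded by N MB" message; the free shrinks it. */
    static int malloc_walk(void)
    {
        for (size_t sz = 2 * 1024 * 1024; sz <= 1024 * 1024 * 1024ULL; sz *= 2) {
            void *buf = spdk_dma_malloc(sz, 0x200000 /* 2 MiB align */, NULL);
            if (buf == NULL) {
                fprintf(stderr, "alloc of %zu bytes failed\n", sz);
                return -1;
            }
            spdk_dma_free(buf);
        }
        return 0;
    }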
00:04:14.629 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.629 EAL: Restoring previous memory policy: 4 00:04:14.629 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.629 EAL: request: mp_malloc_sync 00:04:14.629 EAL: No shared files mode enabled, IPC is disabled 00:04:14.629 EAL: Heap on socket 0 was expanded by 66MB 00:04:14.629 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.629 EAL: request: mp_malloc_sync 00:04:14.629 EAL: No shared files mode enabled, IPC is disabled 00:04:14.629 EAL: Heap on socket 0 was shrunk by 66MB 00:04:14.888 EAL: Trying to obtain current memory policy. 00:04:14.888 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.888 EAL: Restoring previous memory policy: 4 00:04:14.888 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.888 EAL: request: mp_malloc_sync 00:04:14.888 EAL: No shared files mode enabled, IPC is disabled 00:04:14.888 EAL: Heap on socket 0 was expanded by 130MB 00:04:14.888 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.888 EAL: request: mp_malloc_sync 00:04:14.888 EAL: No shared files mode enabled, IPC is disabled 00:04:14.888 EAL: Heap on socket 0 was shrunk by 130MB 00:04:15.146 EAL: Trying to obtain current memory policy. 00:04:15.146 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.146 EAL: Restoring previous memory policy: 4 00:04:15.146 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.146 EAL: request: mp_malloc_sync 00:04:15.146 EAL: No shared files mode enabled, IPC is disabled 00:04:15.146 EAL: Heap on socket 0 was expanded by 258MB 00:04:15.404 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.404 EAL: request: mp_malloc_sync 00:04:15.404 EAL: No shared files mode enabled, IPC is disabled 00:04:15.404 EAL: Heap on socket 0 was shrunk by 258MB 00:04:15.663 EAL: Trying to obtain current memory policy. 00:04:15.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:15.663 EAL: Restoring previous memory policy: 4 00:04:15.663 EAL: Calling mem event callback 'spdk:(nil)' 00:04:15.663 EAL: request: mp_malloc_sync 00:04:15.663 EAL: No shared files mode enabled, IPC is disabled 00:04:15.663 EAL: Heap on socket 0 was expanded by 514MB 00:04:16.229 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.229 EAL: request: mp_malloc_sync 00:04:16.229 EAL: No shared files mode enabled, IPC is disabled 00:04:16.229 EAL: Heap on socket 0 was shrunk by 514MB 00:04:16.796 EAL: Trying to obtain current memory policy. 
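[annotation] Each "Calling mem event callback 'spdk:(nil)'" line is DPDK invoking a handler SPDK registered at init so it can update its vtophys maps as hugepages come and go. A hedged sketch of how such a callback is wired up with the public DPDK API (the handler body is illustrative):

    #include <stdio.h>
    #include <rte_memory.h>

    /* Invoked by EAL on every heap grow/shrink, e.g. the
     * "Heap on socket 0 was expanded by 66MB" events above. */
    static void
    mem_event_cb(enum rte_mem_event type, const void *addr, size_t len,
                 void *arg)
    {
        printf("mem event %s: addr=%p len=%zu\n",
               type == RTE_MEM_EVENT_ALLOC ? "alloc" : "free", addr, len);
    }

    static int
    register_cb(void)
    {
        /* The name and arg show up in the log as 'spdk:(nil)'. */
        return rte_mem_event_callback_register("spdk", mem_event_cb, NULL);
    }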
00:04:16.796 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.054 EAL: Restoring previous memory policy: 4 00:04:17.054 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.054 EAL: request: mp_malloc_sync 00:04:17.054 EAL: No shared files mode enabled, IPC is disabled 00:04:17.054 EAL: Heap on socket 0 was expanded by 1026MB 00:04:17.990 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.247 EAL: request: mp_malloc_sync 00:04:18.247 EAL: No shared files mode enabled, IPC is disabled 00:04:18.247 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:19.182 passed 00:04:19.182 00:04:19.182 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.182 suites 1 1 n/a 0 0 00:04:19.182 tests 2 2 2 0 0 00:04:19.182 asserts 5908 5908 5908 0 n/a 00:04:19.182 00:04:19.182 Elapsed time = 4.759 seconds 00:04:19.182 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.182 EAL: request: mp_malloc_sync 00:04:19.182 EAL: No shared files mode enabled, IPC is disabled 00:04:19.182 EAL: Heap on socket 0 was shrunk by 2MB 00:04:19.182 EAL: No shared files mode enabled, IPC is disabled 00:04:19.182 EAL: No shared files mode enabled, IPC is disabled 00:04:19.182 EAL: No shared files mode enabled, IPC is disabled 00:04:19.182 00:04:19.182 real 0m5.010s 00:04:19.182 user 0m4.258s 00:04:19.182 sys 0m0.608s 00:04:19.182 ************************************ 00:04:19.182 END TEST env_vtophys 00:04:19.182 ************************************ 00:04:19.182 17:20:52 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.182 17:20:52 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:19.182 17:20:52 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:19.182 17:20:52 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.182 17:20:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.182 17:20:52 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.182 ************************************ 00:04:19.182 START TEST env_pci 00:04:19.182 ************************************ 00:04:19.182 17:20:52 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:19.182 00:04:19.182 00:04:19.182 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.182 http://cunit.sourceforge.net/ 00:04:19.182 00:04:19.182 00:04:19.182 Suite: pci 00:04:19.182 Test: pci_hook ...[2024-12-07 17:20:52.341635] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57010 has claimed it 00:04:19.182 passed 00:04:19.182 00:04:19.182 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.182 suites 1 1 n/a 0 0 00:04:19.182 tests 1 1 1 0 0 00:04:19.182 asserts 25 25 25 0 n/a 00:04:19.182 00:04:19.182 Elapsed time = 0.006 seconds 00:04:19.182 EAL: Cannot find device (10000:00:01.0) 00:04:19.182 EAL: Failed to attach device on primary process 00:04:19.182 00:04:19.182 real 0m0.062s 00:04:19.182 user 0m0.027s 00:04:19.182 sys 0m0.034s 00:04:19.182 ************************************ 00:04:19.182 END TEST env_pci 00:04:19.182 ************************************ 00:04:19.182 17:20:52 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.182 17:20:52 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:19.182 17:20:52 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:19.182 17:20:52 env -- env/env.sh@15 -- # uname 00:04:19.182 17:20:52 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:19.182 17:20:52 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:19.182 17:20:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:19.182 17:20:52 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:19.182 17:20:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.182 17:20:52 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.182 ************************************ 00:04:19.182 START TEST env_dpdk_post_init 00:04:19.182 ************************************ 00:04:19.182 17:20:52 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:19.182 EAL: Detected CPU lcores: 10 00:04:19.182 EAL: Detected NUMA nodes: 1 00:04:19.182 EAL: Detected shared linkage of DPDK 00:04:19.182 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:19.182 EAL: Selected IOVA mode 'PA' 00:04:19.441 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:19.441 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:19.441 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:19.441 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:19.441 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:19.441 Starting DPDK initialization... 00:04:19.441 Starting SPDK post initialization... 00:04:19.441 SPDK NVMe probe 00:04:19.441 Attaching to 0000:00:10.0 00:04:19.441 Attaching to 0000:00:11.0 00:04:19.441 Attaching to 0000:00:12.0 00:04:19.441 Attaching to 0000:00:13.0 00:04:19.441 Attached to 0000:00:10.0 00:04:19.441 Attached to 0000:00:11.0 00:04:19.441 Attached to 0000:00:13.0 00:04:19.441 Attached to 0000:00:12.0 00:04:19.441 Cleaning up... 
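[annotation] The "Attaching to"/"Attached to" pairs come from the NVMe driver's probe flow: env_dpdk_post_init enumerates PCI devices (the four 1b36:0010 QEMU NVMe functions) and hands each controller to an attach callback. A minimal sketch of that flow using the standard spdk_nvme_probe() API; the callback bodies are illustrative:

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true; /* accept every controller found */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
    }

    static int
    enumerate(void)
    {
        /* NULL trid => scan the local PCIe bus, as in the log above. */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }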
00:04:19.441 00:04:19.441 real 0m0.237s 00:04:19.441 user 0m0.066s 00:04:19.441 sys 0m0.074s 00:04:19.441 17:20:52 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.441 17:20:52 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:19.441 ************************************ 00:04:19.441 END TEST env_dpdk_post_init 00:04:19.441 ************************************ 00:04:19.441 17:20:52 env -- env/env.sh@26 -- # uname 00:04:19.441 17:20:52 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:19.441 17:20:52 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:19.441 17:20:52 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.441 17:20:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.441 17:20:52 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.441 ************************************ 00:04:19.441 START TEST env_mem_callbacks 00:04:19.441 ************************************ 00:04:19.441 17:20:52 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:19.441 EAL: Detected CPU lcores: 10 00:04:19.441 EAL: Detected NUMA nodes: 1 00:04:19.441 EAL: Detected shared linkage of DPDK 00:04:19.441 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:19.441 EAL: Selected IOVA mode 'PA' 00:04:19.700 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:19.700 00:04:19.700 00:04:19.700 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.700 http://cunit.sourceforge.net/ 00:04:19.700 00:04:19.700 00:04:19.700 Suite: memory 00:04:19.700 Test: test ... 00:04:19.700 register 0x200000200000 2097152 00:04:19.700 malloc 3145728 00:04:19.700 register 0x200000400000 4194304 00:04:19.700 buf 0x2000004fffc0 len 3145728 PASSED 00:04:19.700 malloc 64 00:04:19.700 buf 0x2000004ffec0 len 64 PASSED 00:04:19.700 malloc 4194304 00:04:19.700 register 0x200000800000 6291456 00:04:19.700 buf 0x2000009fffc0 len 4194304 PASSED 00:04:19.700 free 0x2000004fffc0 3145728 00:04:19.700 free 0x2000004ffec0 64 00:04:19.700 unregister 0x200000400000 4194304 PASSED 00:04:19.700 free 0x2000009fffc0 4194304 00:04:19.700 unregister 0x200000800000 6291456 PASSED 00:04:19.700 malloc 8388608 00:04:19.700 register 0x200000400000 10485760 00:04:19.700 buf 0x2000005fffc0 len 8388608 PASSED 00:04:19.700 free 0x2000005fffc0 8388608 00:04:19.700 unregister 0x200000400000 10485760 PASSED 00:04:19.700 passed 00:04:19.700 00:04:19.700 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.700 suites 1 1 n/a 0 0 00:04:19.700 tests 1 1 1 0 0 00:04:19.700 asserts 15 15 15 0 n/a 00:04:19.700 00:04:19.700 Elapsed time = 0.037 seconds 00:04:19.700 00:04:19.700 real 0m0.198s 00:04:19.700 user 0m0.062s 00:04:19.700 sys 0m0.034s 00:04:19.700 17:20:52 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.700 17:20:52 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:19.700 ************************************ 00:04:19.700 END TEST env_mem_callbacks 00:04:19.700 ************************************ 00:04:19.700 00:04:19.700 real 0m6.109s 00:04:19.700 user 0m4.799s 00:04:19.700 sys 0m0.956s 00:04:19.700 17:20:52 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.700 17:20:52 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.700 ************************************ 00:04:19.700 END TEST env 00:04:19.700 
************************************ 00:04:19.700 17:20:52 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:19.700 17:20:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.700 17:20:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.700 17:20:52 -- common/autotest_common.sh@10 -- # set +x 00:04:19.700 ************************************ 00:04:19.700 START TEST rpc 00:04:19.700 ************************************ 00:04:19.700 17:20:52 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:19.700 * Looking for test storage... 00:04:19.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:19.700 17:20:53 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:19.700 17:20:53 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:19.700 17:20:53 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:19.959 17:20:53 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:19.959 17:20:53 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.959 17:20:53 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.959 17:20:53 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.959 17:20:53 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.959 17:20:53 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.959 17:20:53 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.959 17:20:53 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.959 17:20:53 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.959 17:20:53 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.959 17:20:53 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.959 17:20:53 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.959 17:20:53 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:19.959 17:20:53 rpc -- scripts/common.sh@345 -- # : 1 00:04:19.960 17:20:53 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.960 17:20:53 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:19.960 17:20:53 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:19.960 17:20:53 rpc -- scripts/common.sh@353 -- # local d=1 00:04:19.960 17:20:53 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.960 17:20:53 rpc -- scripts/common.sh@355 -- # echo 1 00:04:19.960 17:20:53 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.960 17:20:53 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:19.960 17:20:53 rpc -- scripts/common.sh@353 -- # local d=2 00:04:19.960 17:20:53 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.960 17:20:53 rpc -- scripts/common.sh@355 -- # echo 2 00:04:19.960 17:20:53 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.960 17:20:53 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.960 17:20:53 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.960 17:20:53 rpc -- scripts/common.sh@368 -- # return 0 00:04:19.960 17:20:53 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.960 17:20:53 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:19.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.960 --rc genhtml_branch_coverage=1 00:04:19.960 --rc genhtml_function_coverage=1 00:04:19.960 --rc genhtml_legend=1 00:04:19.960 --rc geninfo_all_blocks=1 00:04:19.960 --rc geninfo_unexecuted_blocks=1 00:04:19.960 00:04:19.960 ' 00:04:19.960 17:20:53 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:19.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.960 --rc genhtml_branch_coverage=1 00:04:19.960 --rc genhtml_function_coverage=1 00:04:19.960 --rc genhtml_legend=1 00:04:19.960 --rc geninfo_all_blocks=1 00:04:19.960 --rc geninfo_unexecuted_blocks=1 00:04:19.960 00:04:19.960 ' 00:04:19.960 17:20:53 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:19.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.960 --rc genhtml_branch_coverage=1 00:04:19.960 --rc genhtml_function_coverage=1 00:04:19.960 --rc genhtml_legend=1 00:04:19.960 --rc geninfo_all_blocks=1 00:04:19.960 --rc geninfo_unexecuted_blocks=1 00:04:19.960 00:04:19.960 ' 00:04:19.960 17:20:53 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:19.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.960 --rc genhtml_branch_coverage=1 00:04:19.960 --rc genhtml_function_coverage=1 00:04:19.960 --rc genhtml_legend=1 00:04:19.960 --rc geninfo_all_blocks=1 00:04:19.960 --rc geninfo_unexecuted_blocks=1 00:04:19.960 00:04:19.960 ' 00:04:19.960 17:20:53 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57137 00:04:19.960 17:20:53 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.960 17:20:53 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57137 00:04:19.960 17:20:53 rpc -- common/autotest_common.sh@835 -- # '[' -z 57137 ']' 00:04:19.960 17:20:53 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.960 17:20:53 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:19.960 17:20:53 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
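[annotation] waitforlisten blocks until spdk_tgt answers on /var/tmp/spdk.sock; after that, every rpc_cmd in this suite is a JSON-RPC 2.0 request over that UNIX socket. A bare-bones client sketch under those assumptions (a single request/response, no framing or error recovery):

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        const char *req =
            "{\"jsonrpc\":\"2.0\",\"method\":\"rpc_get_methods\",\"id\":1}";
        char resp[65536];
        ssize_t n;
        int fd;

        strncpy(addr.sun_path, "/var/tmp/spdk.sock",
                sizeof(addr.sun_path) - 1);

        fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }

        /* One request out, one response back; spdk_tgt parses raw JSON. */
        if (write(fd, req, strlen(req)) < 0) {
            perror("write");
        }
        n = read(fd, resp, sizeof(resp) - 1);
        if (n > 0) {
            resp[n] = '\0';
            printf("%s\n", resp);
        }
        close(fd);
        return 0;
    }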
00:04:19.960 17:20:53 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:19.960 17:20:53 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:19.960 17:20:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.960 [2024-12-07 17:20:53.179338] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:19.960 [2024-12-07 17:20:53.179459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57137 ] 00:04:20.218 [2024-12-07 17:20:53.340478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.218 [2024-12-07 17:20:53.436783] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:20.218 [2024-12-07 17:20:53.436831] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57137' to capture a snapshot of events at runtime. 00:04:20.218 [2024-12-07 17:20:53.436841] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:20.218 [2024-12-07 17:20:53.436851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:20.218 [2024-12-07 17:20:53.436858] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57137 for offline analysis/debug. 00:04:20.218 [2024-12-07 17:20:53.437692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.782 17:20:54 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:20.782 17:20:54 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:20.782 17:20:54 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:20.782 17:20:54 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:20.782 17:20:54 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:20.782 17:20:54 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:20.782 17:20:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.782 17:20:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.782 17:20:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.782 ************************************ 00:04:20.782 START TEST rpc_integrity 00:04:20.782 ************************************ 00:04:20.782 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:20.782 17:20:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:20.782 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.782 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.782 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.782 17:20:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:20.782 17:20:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:20.782 17:20:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:20.782 17:20:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
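[annotation] The bdev_malloc_create 8 512 call asks for an 8 MB ramdisk with 512-byte blocks, which is exactly the "num_blocks": 16384 reported by bdev_get_bdevs below (8 * 1024 * 1024 / 512 = 16384). A quick check:

    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        const unsigned long size_mb = 8;      /* 1st arg to bdev_malloc_create */
        const unsigned long block_size = 512; /* 2nd arg */
        const unsigned long num_blocks = size_mb * 1024 * 1024 / block_size;

        assert(num_blocks == 16384);          /* matches "num_blocks": 16384 */
        printf("%lu MB / %lu B blocks => %lu blocks\n",
               size_mb, block_size, num_blocks);
        return 0;
    }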
00:04:20.782 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.782 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.782 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.782 17:20:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:20.782 17:20:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:20.783 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.783 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.783 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.783 17:20:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:20.783 { 00:04:20.783 "name": "Malloc0", 00:04:20.783 "aliases": [ 00:04:20.783 "29a07191-6822-4a3c-9cf3-e0114e5c1e2c" 00:04:20.783 ], 00:04:20.783 "product_name": "Malloc disk", 00:04:20.783 "block_size": 512, 00:04:20.783 "num_blocks": 16384, 00:04:20.783 "uuid": "29a07191-6822-4a3c-9cf3-e0114e5c1e2c", 00:04:20.783 "assigned_rate_limits": { 00:04:20.783 "rw_ios_per_sec": 0, 00:04:20.783 "rw_mbytes_per_sec": 0, 00:04:20.783 "r_mbytes_per_sec": 0, 00:04:20.783 "w_mbytes_per_sec": 0 00:04:20.783 }, 00:04:20.783 "claimed": false, 00:04:20.783 "zoned": false, 00:04:20.783 "supported_io_types": { 00:04:20.783 "read": true, 00:04:20.783 "write": true, 00:04:20.783 "unmap": true, 00:04:20.783 "flush": true, 00:04:20.783 "reset": true, 00:04:20.783 "nvme_admin": false, 00:04:20.783 "nvme_io": false, 00:04:20.783 "nvme_io_md": false, 00:04:20.783 "write_zeroes": true, 00:04:20.783 "zcopy": true, 00:04:20.783 "get_zone_info": false, 00:04:20.783 "zone_management": false, 00:04:20.783 "zone_append": false, 00:04:20.783 "compare": false, 00:04:20.783 "compare_and_write": false, 00:04:20.783 "abort": true, 00:04:20.783 "seek_hole": false, 00:04:20.783 "seek_data": false, 00:04:20.783 "copy": true, 00:04:20.783 "nvme_iov_md": false 00:04:20.783 }, 00:04:20.783 "memory_domains": [ 00:04:20.783 { 00:04:20.783 "dma_device_id": "system", 00:04:20.783 "dma_device_type": 1 00:04:20.783 }, 00:04:20.783 { 00:04:20.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.783 "dma_device_type": 2 00:04:20.783 } 00:04:20.783 ], 00:04:20.783 "driver_specific": {} 00:04:20.783 } 00:04:20.783 ]' 00:04:20.783 17:20:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:20.783 17:20:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:20.783 17:20:54 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:20.783 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.783 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.783 [2024-12-07 17:20:54.135180] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:20.783 [2024-12-07 17:20:54.135234] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:20.783 [2024-12-07 17:20:54.135259] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:20.783 [2024-12-07 17:20:54.135270] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:20.783 [2024-12-07 17:20:54.137467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:20.783 [2024-12-07 17:20:54.137507] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:20.783 
Passthru0 00:04:20.783 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.783 17:20:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:20.783 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.783 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.041 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.041 17:20:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:21.041 { 00:04:21.041 "name": "Malloc0", 00:04:21.041 "aliases": [ 00:04:21.041 "29a07191-6822-4a3c-9cf3-e0114e5c1e2c" 00:04:21.041 ], 00:04:21.041 "product_name": "Malloc disk", 00:04:21.041 "block_size": 512, 00:04:21.041 "num_blocks": 16384, 00:04:21.041 "uuid": "29a07191-6822-4a3c-9cf3-e0114e5c1e2c", 00:04:21.041 "assigned_rate_limits": { 00:04:21.041 "rw_ios_per_sec": 0, 00:04:21.041 "rw_mbytes_per_sec": 0, 00:04:21.041 "r_mbytes_per_sec": 0, 00:04:21.041 "w_mbytes_per_sec": 0 00:04:21.041 }, 00:04:21.041 "claimed": true, 00:04:21.041 "claim_type": "exclusive_write", 00:04:21.041 "zoned": false, 00:04:21.041 "supported_io_types": { 00:04:21.041 "read": true, 00:04:21.041 "write": true, 00:04:21.041 "unmap": true, 00:04:21.041 "flush": true, 00:04:21.041 "reset": true, 00:04:21.041 "nvme_admin": false, 00:04:21.041 "nvme_io": false, 00:04:21.041 "nvme_io_md": false, 00:04:21.041 "write_zeroes": true, 00:04:21.041 "zcopy": true, 00:04:21.041 "get_zone_info": false, 00:04:21.041 "zone_management": false, 00:04:21.041 "zone_append": false, 00:04:21.041 "compare": false, 00:04:21.041 "compare_and_write": false, 00:04:21.041 "abort": true, 00:04:21.041 "seek_hole": false, 00:04:21.041 "seek_data": false, 00:04:21.041 "copy": true, 00:04:21.041 "nvme_iov_md": false 00:04:21.041 }, 00:04:21.041 "memory_domains": [ 00:04:21.041 { 00:04:21.041 "dma_device_id": "system", 00:04:21.041 "dma_device_type": 1 00:04:21.041 }, 00:04:21.041 { 00:04:21.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.041 "dma_device_type": 2 00:04:21.041 } 00:04:21.041 ], 00:04:21.041 "driver_specific": {} 00:04:21.041 }, 00:04:21.041 { 00:04:21.041 "name": "Passthru0", 00:04:21.041 "aliases": [ 00:04:21.041 "6b5f4ae3-a31c-58a3-8a5d-c64086f66e4d" 00:04:21.041 ], 00:04:21.041 "product_name": "passthru", 00:04:21.041 "block_size": 512, 00:04:21.041 "num_blocks": 16384, 00:04:21.041 "uuid": "6b5f4ae3-a31c-58a3-8a5d-c64086f66e4d", 00:04:21.041 "assigned_rate_limits": { 00:04:21.041 "rw_ios_per_sec": 0, 00:04:21.041 "rw_mbytes_per_sec": 0, 00:04:21.041 "r_mbytes_per_sec": 0, 00:04:21.041 "w_mbytes_per_sec": 0 00:04:21.041 }, 00:04:21.041 "claimed": false, 00:04:21.041 "zoned": false, 00:04:21.041 "supported_io_types": { 00:04:21.041 "read": true, 00:04:21.041 "write": true, 00:04:21.041 "unmap": true, 00:04:21.041 "flush": true, 00:04:21.041 "reset": true, 00:04:21.041 "nvme_admin": false, 00:04:21.041 "nvme_io": false, 00:04:21.041 "nvme_io_md": false, 00:04:21.041 "write_zeroes": true, 00:04:21.041 "zcopy": true, 00:04:21.041 "get_zone_info": false, 00:04:21.041 "zone_management": false, 00:04:21.041 "zone_append": false, 00:04:21.041 "compare": false, 00:04:21.041 "compare_and_write": false, 00:04:21.041 "abort": true, 00:04:21.041 "seek_hole": false, 00:04:21.041 "seek_data": false, 00:04:21.041 "copy": true, 00:04:21.041 "nvme_iov_md": false 00:04:21.041 }, 00:04:21.041 "memory_domains": [ 00:04:21.041 { 00:04:21.041 "dma_device_id": "system", 00:04:21.041 "dma_device_type": 1 00:04:21.041 }, 
00:04:21.041 { 00:04:21.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.041 "dma_device_type": 2 00:04:21.041 } 00:04:21.041 ], 00:04:21.041 "driver_specific": { 00:04:21.041 "passthru": { 00:04:21.041 "name": "Passthru0", 00:04:21.041 "base_bdev_name": "Malloc0" 00:04:21.041 } 00:04:21.041 } 00:04:21.041 } 00:04:21.041 ]' 00:04:21.041 17:20:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:21.041 17:20:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:21.041 17:20:54 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:21.042 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.042 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.042 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.042 17:20:54 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:21.042 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.042 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.042 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.042 17:20:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:21.042 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.042 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.042 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.042 17:20:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:21.042 17:20:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:21.042 17:20:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:21.042 00:04:21.042 real 0m0.237s 00:04:21.042 user 0m0.128s 00:04:21.042 sys 0m0.027s 00:04:21.042 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.042 ************************************ 00:04:21.042 END TEST rpc_integrity 00:04:21.042 ************************************ 00:04:21.042 17:20:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.042 17:20:54 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:21.042 17:20:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.042 17:20:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.042 17:20:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.042 ************************************ 00:04:21.042 START TEST rpc_plugins 00:04:21.042 ************************************ 00:04:21.042 17:20:54 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:21.042 17:20:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:21.042 17:20:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.042 17:20:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.042 17:20:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.042 17:20:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:21.042 17:20:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:21.042 17:20:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.042 17:20:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.042 17:20:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.042 17:20:54 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:21.042 { 00:04:21.042 "name": "Malloc1", 00:04:21.042 "aliases": [ 00:04:21.042 "440f9922-910d-46bd-a69b-1f719ccad3ec" 00:04:21.042 ], 00:04:21.042 "product_name": "Malloc disk", 00:04:21.042 "block_size": 4096, 00:04:21.042 "num_blocks": 256, 00:04:21.042 "uuid": "440f9922-910d-46bd-a69b-1f719ccad3ec", 00:04:21.042 "assigned_rate_limits": { 00:04:21.042 "rw_ios_per_sec": 0, 00:04:21.042 "rw_mbytes_per_sec": 0, 00:04:21.042 "r_mbytes_per_sec": 0, 00:04:21.042 "w_mbytes_per_sec": 0 00:04:21.042 }, 00:04:21.042 "claimed": false, 00:04:21.042 "zoned": false, 00:04:21.042 "supported_io_types": { 00:04:21.042 "read": true, 00:04:21.042 "write": true, 00:04:21.042 "unmap": true, 00:04:21.042 "flush": true, 00:04:21.042 "reset": true, 00:04:21.042 "nvme_admin": false, 00:04:21.042 "nvme_io": false, 00:04:21.042 "nvme_io_md": false, 00:04:21.042 "write_zeroes": true, 00:04:21.042 "zcopy": true, 00:04:21.042 "get_zone_info": false, 00:04:21.042 "zone_management": false, 00:04:21.042 "zone_append": false, 00:04:21.042 "compare": false, 00:04:21.042 "compare_and_write": false, 00:04:21.042 "abort": true, 00:04:21.042 "seek_hole": false, 00:04:21.042 "seek_data": false, 00:04:21.042 "copy": true, 00:04:21.042 "nvme_iov_md": false 00:04:21.042 }, 00:04:21.042 "memory_domains": [ 00:04:21.042 { 00:04:21.042 "dma_device_id": "system", 00:04:21.042 "dma_device_type": 1 00:04:21.042 }, 00:04:21.042 { 00:04:21.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.042 "dma_device_type": 2 00:04:21.042 } 00:04:21.042 ], 00:04:21.042 "driver_specific": {} 00:04:21.042 } 00:04:21.042 ]' 00:04:21.042 17:20:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:21.042 17:20:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:21.042 17:20:54 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:21.042 17:20:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.042 17:20:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.042 17:20:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.042 17:20:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:21.042 17:20:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.042 17:20:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.042 17:20:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.042 17:20:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:21.042 17:20:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:21.042 17:20:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:21.042 00:04:21.042 real 0m0.108s 00:04:21.042 user 0m0.054s 00:04:21.042 sys 0m0.019s 00:04:21.042 ************************************ 00:04:21.042 END TEST rpc_plugins 00:04:21.042 ************************************ 00:04:21.042 17:20:54 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.042 17:20:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.301 17:20:54 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:21.301 17:20:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.301 17:20:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.301 17:20:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.301 ************************************ 00:04:21.301 START TEST rpc_trace_cmd_test 
00:04:21.301 ************************************ 00:04:21.301 17:20:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:21.301 17:20:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:21.301 17:20:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:21.301 17:20:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.301 17:20:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:21.301 17:20:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.301 17:20:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:21.301 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57137", 00:04:21.301 "tpoint_group_mask": "0x8", 00:04:21.301 "iscsi_conn": { 00:04:21.301 "mask": "0x2", 00:04:21.301 "tpoint_mask": "0x0" 00:04:21.301 }, 00:04:21.301 "scsi": { 00:04:21.301 "mask": "0x4", 00:04:21.301 "tpoint_mask": "0x0" 00:04:21.301 }, 00:04:21.301 "bdev": { 00:04:21.301 "mask": "0x8", 00:04:21.301 "tpoint_mask": "0xffffffffffffffff" 00:04:21.301 }, 00:04:21.301 "nvmf_rdma": { 00:04:21.301 "mask": "0x10", 00:04:21.301 "tpoint_mask": "0x0" 00:04:21.301 }, 00:04:21.301 "nvmf_tcp": { 00:04:21.301 "mask": "0x20", 00:04:21.301 "tpoint_mask": "0x0" 00:04:21.301 }, 00:04:21.301 "ftl": { 00:04:21.301 "mask": "0x40", 00:04:21.301 "tpoint_mask": "0x0" 00:04:21.301 }, 00:04:21.301 "blobfs": { 00:04:21.301 "mask": "0x80", 00:04:21.301 "tpoint_mask": "0x0" 00:04:21.301 }, 00:04:21.301 "dsa": { 00:04:21.301 "mask": "0x200", 00:04:21.301 "tpoint_mask": "0x0" 00:04:21.301 }, 00:04:21.301 "thread": { 00:04:21.301 "mask": "0x400", 00:04:21.301 "tpoint_mask": "0x0" 00:04:21.301 }, 00:04:21.301 "nvme_pcie": { 00:04:21.301 "mask": "0x800", 00:04:21.301 "tpoint_mask": "0x0" 00:04:21.301 }, 00:04:21.301 "iaa": { 00:04:21.301 "mask": "0x1000", 00:04:21.301 "tpoint_mask": "0x0" 00:04:21.301 }, 00:04:21.301 "nvme_tcp": { 00:04:21.301 "mask": "0x2000", 00:04:21.301 "tpoint_mask": "0x0" 00:04:21.301 }, 00:04:21.301 "bdev_nvme": { 00:04:21.301 "mask": "0x4000", 00:04:21.301 "tpoint_mask": "0x0" 00:04:21.301 }, 00:04:21.301 "sock": { 00:04:21.301 "mask": "0x8000", 00:04:21.301 "tpoint_mask": "0x0" 00:04:21.301 }, 00:04:21.301 "blob": { 00:04:21.301 "mask": "0x10000", 00:04:21.301 "tpoint_mask": "0x0" 00:04:21.301 }, 00:04:21.301 "bdev_raid": { 00:04:21.301 "mask": "0x20000", 00:04:21.301 "tpoint_mask": "0x0" 00:04:21.301 }, 00:04:21.301 "scheduler": { 00:04:21.301 "mask": "0x40000", 00:04:21.301 "tpoint_mask": "0x0" 00:04:21.301 } 00:04:21.301 }' 00:04:21.301 17:20:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:21.301 17:20:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:21.301 17:20:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:21.301 17:20:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:21.301 17:20:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:21.301 17:20:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:21.301 17:20:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:21.301 17:20:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:21.301 17:20:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:21.301 17:20:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:21.301 00:04:21.301 real 0m0.176s 00:04:21.301 
user 0m0.147s 00:04:21.301 sys 0m0.018s 00:04:21.301 ************************************ 00:04:21.301 END TEST rpc_trace_cmd_test 00:04:21.301 ************************************ 00:04:21.301 17:20:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.301 17:20:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:21.302 17:20:54 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:21.302 17:20:54 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:21.302 17:20:54 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:21.302 17:20:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.302 17:20:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.302 17:20:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.302 ************************************ 00:04:21.302 START TEST rpc_daemon_integrity 00:04:21.302 ************************************ 00:04:21.302 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:21.302 17:20:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:21.302 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.302 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.302 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.302 17:20:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:21.302 17:20:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:21.562 17:20:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:21.562 17:20:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:21.562 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.562 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.562 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.562 17:20:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:21.562 17:20:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:21.562 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.562 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.562 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.562 17:20:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:21.562 { 00:04:21.562 "name": "Malloc2", 00:04:21.562 "aliases": [ 00:04:21.562 "40bec905-ac37-492e-807b-e5a0900a0284" 00:04:21.562 ], 00:04:21.562 "product_name": "Malloc disk", 00:04:21.562 "block_size": 512, 00:04:21.562 "num_blocks": 16384, 00:04:21.562 "uuid": "40bec905-ac37-492e-807b-e5a0900a0284", 00:04:21.562 "assigned_rate_limits": { 00:04:21.562 "rw_ios_per_sec": 0, 00:04:21.562 "rw_mbytes_per_sec": 0, 00:04:21.562 "r_mbytes_per_sec": 0, 00:04:21.562 "w_mbytes_per_sec": 0 00:04:21.562 }, 00:04:21.562 "claimed": false, 00:04:21.562 "zoned": false, 00:04:21.562 "supported_io_types": { 00:04:21.562 "read": true, 00:04:21.562 "write": true, 00:04:21.562 "unmap": true, 00:04:21.562 "flush": true, 00:04:21.562 "reset": true, 00:04:21.562 "nvme_admin": false, 00:04:21.562 "nvme_io": false, 00:04:21.562 "nvme_io_md": false, 00:04:21.562 "write_zeroes": true, 00:04:21.562 "zcopy": true, 00:04:21.562 "get_zone_info": 
false,
00:04:21.562 "zone_management": false,
00:04:21.562 "zone_append": false,
00:04:21.562 "compare": false,
00:04:21.562 "compare_and_write": false,
00:04:21.562 "abort": true,
00:04:21.562 "seek_hole": false,
00:04:21.562 "seek_data": false,
00:04:21.562 "copy": true,
00:04:21.562 "nvme_iov_md": false
00:04:21.562 },
00:04:21.562 "memory_domains": [
00:04:21.562 {
00:04:21.562 "dma_device_id": "system",
00:04:21.562 "dma_device_type": 1
00:04:21.562 },
00:04:21.562 {
00:04:21.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:21.562 "dma_device_type": 2
00:04:21.562 }
00:04:21.562 ],
00:04:21.562 "driver_specific": {}
00:04:21.562 }
00:04:21.562 ]'
00:04:21.562 17:20:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:21.562 17:20:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:21.562 17:20:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:04:21.562 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:21.562 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:21.562 [2024-12-07 17:20:54.769744] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:04:21.562 [2024-12-07 17:20:54.769800] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:21.562 [2024-12-07 17:20:54.769820] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:04:21.562 [2024-12-07 17:20:54.769831] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:21.562 [2024-12-07 17:20:54.771951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:21.562 [2024-12-07 17:20:54.772018] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:21.562 Passthru0
00:04:21.562 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:21.562 17:20:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:21.562 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:21.562 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:21.562 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:21.562 17:20:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:21.562 {
00:04:21.562 "name": "Malloc2",
00:04:21.562 "aliases": [
00:04:21.562 "40bec905-ac37-492e-807b-e5a0900a0284"
00:04:21.562 ],
00:04:21.562 "product_name": "Malloc disk",
00:04:21.562 "block_size": 512,
00:04:21.562 "num_blocks": 16384,
00:04:21.562 "uuid": "40bec905-ac37-492e-807b-e5a0900a0284",
00:04:21.562 "assigned_rate_limits": {
00:04:21.562 "rw_ios_per_sec": 0,
00:04:21.562 "rw_mbytes_per_sec": 0,
00:04:21.562 "r_mbytes_per_sec": 0,
00:04:21.562 "w_mbytes_per_sec": 0
00:04:21.562 },
00:04:21.562 "claimed": true,
00:04:21.563 "claim_type": "exclusive_write",
00:04:21.563 "zoned": false,
00:04:21.563 "supported_io_types": {
00:04:21.563 "read": true,
00:04:21.563 "write": true,
00:04:21.563 "unmap": true,
00:04:21.563 "flush": true,
00:04:21.563 "reset": true,
00:04:21.563 "nvme_admin": false,
00:04:21.563 "nvme_io": false,
00:04:21.563 "nvme_io_md": false,
00:04:21.563 "write_zeroes": true,
00:04:21.563 "zcopy": true,
00:04:21.563 "get_zone_info": false,
00:04:21.563 "zone_management": false,
00:04:21.563 "zone_append": false,
00:04:21.563 "compare": false,
00:04:21.563 "compare_and_write": false,
00:04:21.563 "abort": true,
00:04:21.563 "seek_hole": false,
00:04:21.563 "seek_data": false,
00:04:21.563 "copy": true,
00:04:21.563 "nvme_iov_md": false
00:04:21.563 },
00:04:21.563 "memory_domains": [
00:04:21.563 {
00:04:21.563 "dma_device_id": "system",
00:04:21.563 "dma_device_type": 1
00:04:21.563 },
00:04:21.563 {
00:04:21.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:21.563 "dma_device_type": 2
00:04:21.563 }
00:04:21.563 ],
00:04:21.563 "driver_specific": {}
00:04:21.563 },
00:04:21.563 {
00:04:21.563 "name": "Passthru0",
00:04:21.563 "aliases": [
00:04:21.563 "45e8074e-7d0e-5882-8d24-3811cef66ab6"
00:04:21.563 ],
00:04:21.563 "product_name": "passthru",
00:04:21.563 "block_size": 512,
00:04:21.563 "num_blocks": 16384,
00:04:21.563 "uuid": "45e8074e-7d0e-5882-8d24-3811cef66ab6",
00:04:21.563 "assigned_rate_limits": {
00:04:21.563 "rw_ios_per_sec": 0,
00:04:21.563 "rw_mbytes_per_sec": 0,
00:04:21.563 "r_mbytes_per_sec": 0,
00:04:21.563 "w_mbytes_per_sec": 0
00:04:21.563 },
00:04:21.563 "claimed": false,
00:04:21.563 "zoned": false,
00:04:21.563 "supported_io_types": {
00:04:21.563 "read": true,
00:04:21.563 "write": true,
00:04:21.563 "unmap": true,
00:04:21.563 "flush": true,
00:04:21.563 "reset": true,
00:04:21.563 "nvme_admin": false,
00:04:21.563 "nvme_io": false,
00:04:21.563 "nvme_io_md": false,
00:04:21.563 "write_zeroes": true,
00:04:21.563 "zcopy": true,
00:04:21.563 "get_zone_info": false,
00:04:21.563 "zone_management": false,
00:04:21.563 "zone_append": false,
00:04:21.563 "compare": false,
00:04:21.563 "compare_and_write": false,
00:04:21.563 "abort": true,
00:04:21.563 "seek_hole": false,
00:04:21.563 "seek_data": false,
00:04:21.563 "copy": true,
00:04:21.563 "nvme_iov_md": false
00:04:21.563 },
00:04:21.563 "memory_domains": [
00:04:21.563 {
00:04:21.563 "dma_device_id": "system",
00:04:21.563 "dma_device_type": 1
00:04:21.563 },
00:04:21.563 {
00:04:21.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:21.563 "dma_device_type": 2
00:04:21.563 }
00:04:21.563 ],
00:04:21.563 "driver_specific": {
00:04:21.563 "passthru": {
00:04:21.563 "name": "Passthru0",
00:04:21.563 "base_bdev_name": "Malloc2"
00:04:21.563 }
00:04:21.563 }
00:04:21.563 }
00:04:21.563 ]'
00:04:21.563 17:20:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:21.563 17:20:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:21.563 17:20:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:21.563 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:21.563 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:21.563 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:21.563 17:20:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:04:21.563 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:21.563 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:21.563 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:21.563 17:20:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:21.563 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:21.563 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:21.563 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:21.563 17:20:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:21.563 17:20:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:21.563 17:20:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:21.563
00:04:21.563 real 0m0.246s
00:04:21.563 user 0m0.127s
00:04:21.563 sys 0m0.036s
00:04:21.563 ************************************
00:04:21.563 END TEST rpc_daemon_integrity
00:04:21.563 ************************************
00:04:21.563 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:21.563 17:20:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:21.563 17:20:54 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:04:21.563 17:20:54 rpc -- rpc/rpc.sh@84 -- # killprocess 57137
00:04:21.563 17:20:54 rpc -- common/autotest_common.sh@954 -- # '[' -z 57137 ']'
00:04:21.563 17:20:54 rpc -- common/autotest_common.sh@958 -- # kill -0 57137
00:04:21.563 17:20:54 rpc -- common/autotest_common.sh@959 -- # uname
00:04:21.563 17:20:54 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:21.867 17:20:54 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57137
00:04:21.867 17:20:54 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:21.867 killing process with pid 57137
00:04:21.867 17:20:54 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:21.867 17:20:54 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57137'
00:04:21.867 17:20:54 rpc -- common/autotest_common.sh@973 -- # kill 57137
00:04:21.867 17:20:54 rpc -- common/autotest_common.sh@978 -- # wait 57137
00:04:23.243
00:04:23.243 real 0m3.271s
00:04:23.243 user 0m3.718s
00:04:23.243 sys 0m0.558s
00:04:23.243 17:20:56 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:23.243 ************************************
00:04:23.243 END TEST rpc
00:04:23.243 ************************************
00:04:23.243 17:20:56 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:23.243 17:20:56 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:04:23.243 17:20:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:23.243 17:20:56 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:23.243 17:20:56 -- common/autotest_common.sh@10 -- # set +x
00:04:23.243 ************************************
00:04:23.243 START TEST skip_rpc
00:04:23.243 ************************************
00:04:23.243 17:20:56 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:04:23.243 * Looking for test storage...
00:04:23.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:04:23.243 17:20:56 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:23.243 17:20:56 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:23.243 17:20:56 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:23.243 17:20:56 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@345 -- # : 1
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:23.243 17:20:56 skip_rpc -- scripts/common.sh@368 -- # return 0
00:04:23.243 17:20:56 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:23.243 17:20:56 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:23.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:23.243 --rc genhtml_branch_coverage=1
00:04:23.243 --rc genhtml_function_coverage=1
00:04:23.243 --rc genhtml_legend=1
00:04:23.243 --rc geninfo_all_blocks=1
00:04:23.243 --rc geninfo_unexecuted_blocks=1
00:04:23.243
00:04:23.243 '
00:04:23.243 17:20:56 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:23.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:23.243 --rc genhtml_branch_coverage=1
00:04:23.243 --rc genhtml_function_coverage=1
00:04:23.243 --rc genhtml_legend=1
00:04:23.243 --rc geninfo_all_blocks=1
00:04:23.243 --rc geninfo_unexecuted_blocks=1
00:04:23.243
00:04:23.243 '
00:04:23.243 17:20:56 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:23.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:23.243 --rc genhtml_branch_coverage=1
00:04:23.243 --rc genhtml_function_coverage=1
00:04:23.243 --rc genhtml_legend=1
00:04:23.243 --rc geninfo_all_blocks=1
00:04:23.243 --rc geninfo_unexecuted_blocks=1
00:04:23.243
00:04:23.243 '
00:04:23.243 17:20:56 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:23.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:23.243 --rc genhtml_branch_coverage=1
00:04:23.243 --rc genhtml_function_coverage=1
00:04:23.243 --rc genhtml_legend=1
00:04:23.243 --rc geninfo_all_blocks=1
00:04:23.243 --rc geninfo_unexecuted_blocks=1
00:04:23.243
00:04:23.243 '
00:04:23.243 17:20:56 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:04:23.243 17:20:56 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:04:23.243 17:20:56 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:04:23.243 17:20:56 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:23.243 17:20:56 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:23.243 17:20:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:23.243 ************************************
00:04:23.243 START TEST skip_rpc
00:04:23.243 ************************************
00:04:23.243 17:20:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:04:23.243 17:20:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57344
00:04:23.243 17:20:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:23.243 17:20:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:04:23.243 17:20:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:04:23.243 [2024-12-07 17:20:56.537538] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:04:23.243 [2024-12-07 17:20:56.537676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57344 ] 00:04:23.502 [2024-12-07 17:20:56.698978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.502 [2024-12-07 17:20:56.781341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57344 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57344 ']' 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57344 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57344 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.763 killing process with pid 57344 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57344' 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57344 00:04:28.763 17:21:01 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57344 00:04:29.327 00:04:29.327 real 0m6.208s 00:04:29.327 user 0m5.838s 00:04:29.327 sys 0m0.271s 00:04:29.327 ************************************ 00:04:29.327 END TEST skip_rpc 00:04:29.327 ************************************ 00:04:29.327 17:21:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.327 17:21:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:29.327 17:21:02 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:29.327 17:21:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.327 17:21:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.327 17:21:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.327 ************************************ 00:04:29.327 START TEST skip_rpc_with_json 00:04:29.327 ************************************ 00:04:29.327 17:21:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:29.327 17:21:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:29.327 17:21:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57441 00:04:29.327 17:21:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.327 17:21:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57441 00:04:29.327 17:21:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:29.327 17:21:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57441 ']' 00:04:29.327 17:21:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.327 17:21:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.327 17:21:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.327 17:21:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.327 17:21:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:29.584 [2024-12-07 17:21:02.769472] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:29.584 [2024-12-07 17:21:02.769589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57441 ] 00:04:29.584 [2024-12-07 17:21:02.929352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.841 [2024-12-07 17:21:03.028601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.409 17:21:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.409 17:21:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:30.409 17:21:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:30.409 17:21:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.409 17:21:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.409 [2024-12-07 17:21:03.613428] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:30.409 request: 00:04:30.409 { 00:04:30.409 "trtype": "tcp", 00:04:30.409 "method": "nvmf_get_transports", 00:04:30.409 "req_id": 1 00:04:30.409 } 00:04:30.409 Got JSON-RPC error response 00:04:30.409 response: 00:04:30.409 { 00:04:30.409 "code": -19, 00:04:30.409 "message": "No such device" 00:04:30.409 } 00:04:30.409 17:21:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:30.409 17:21:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:30.409 17:21:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.409 17:21:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.409 [2024-12-07 17:21:03.625543] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:30.409 17:21:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.409 17:21:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:30.409 17:21:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.409 17:21:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.409 17:21:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.409 17:21:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:30.409 { 00:04:30.409 "subsystems": [ 00:04:30.409 { 00:04:30.409 "subsystem": "fsdev", 00:04:30.409 "config": [ 00:04:30.409 { 00:04:30.409 "method": "fsdev_set_opts", 00:04:30.409 "params": { 00:04:30.409 "fsdev_io_pool_size": 65535, 00:04:30.409 "fsdev_io_cache_size": 256 00:04:30.409 } 00:04:30.409 } 00:04:30.409 ] 00:04:30.409 }, 00:04:30.409 { 00:04:30.409 "subsystem": "keyring", 00:04:30.409 "config": [] 00:04:30.409 }, 00:04:30.409 { 00:04:30.409 "subsystem": "iobuf", 00:04:30.409 "config": [ 00:04:30.409 { 00:04:30.409 "method": "iobuf_set_options", 00:04:30.409 "params": { 00:04:30.409 "small_pool_count": 8192, 00:04:30.409 "large_pool_count": 1024, 00:04:30.409 "small_bufsize": 8192, 00:04:30.409 "large_bufsize": 135168, 00:04:30.409 "enable_numa": false 00:04:30.409 } 00:04:30.409 } 00:04:30.409 ] 00:04:30.409 }, 00:04:30.409 { 00:04:30.409 "subsystem": "sock", 00:04:30.409 "config": [ 00:04:30.409 { 
00:04:30.409 "method": "sock_set_default_impl", 00:04:30.410 "params": { 00:04:30.410 "impl_name": "posix" 00:04:30.410 } 00:04:30.410 }, 00:04:30.410 { 00:04:30.410 "method": "sock_impl_set_options", 00:04:30.410 "params": { 00:04:30.410 "impl_name": "ssl", 00:04:30.410 "recv_buf_size": 4096, 00:04:30.410 "send_buf_size": 4096, 00:04:30.410 "enable_recv_pipe": true, 00:04:30.410 "enable_quickack": false, 00:04:30.410 "enable_placement_id": 0, 00:04:30.410 "enable_zerocopy_send_server": true, 00:04:30.410 "enable_zerocopy_send_client": false, 00:04:30.410 "zerocopy_threshold": 0, 00:04:30.410 "tls_version": 0, 00:04:30.410 "enable_ktls": false 00:04:30.410 } 00:04:30.410 }, 00:04:30.410 { 00:04:30.410 "method": "sock_impl_set_options", 00:04:30.410 "params": { 00:04:30.410 "impl_name": "posix", 00:04:30.410 "recv_buf_size": 2097152, 00:04:30.410 "send_buf_size": 2097152, 00:04:30.410 "enable_recv_pipe": true, 00:04:30.410 "enable_quickack": false, 00:04:30.410 "enable_placement_id": 0, 00:04:30.410 "enable_zerocopy_send_server": true, 00:04:30.410 "enable_zerocopy_send_client": false, 00:04:30.410 "zerocopy_threshold": 0, 00:04:30.410 "tls_version": 0, 00:04:30.410 "enable_ktls": false 00:04:30.410 } 00:04:30.410 } 00:04:30.410 ] 00:04:30.410 }, 00:04:30.410 { 00:04:30.410 "subsystem": "vmd", 00:04:30.410 "config": [] 00:04:30.410 }, 00:04:30.410 { 00:04:30.410 "subsystem": "accel", 00:04:30.410 "config": [ 00:04:30.410 { 00:04:30.410 "method": "accel_set_options", 00:04:30.410 "params": { 00:04:30.410 "small_cache_size": 128, 00:04:30.410 "large_cache_size": 16, 00:04:30.410 "task_count": 2048, 00:04:30.410 "sequence_count": 2048, 00:04:30.410 "buf_count": 2048 00:04:30.410 } 00:04:30.410 } 00:04:30.410 ] 00:04:30.410 }, 00:04:30.410 { 00:04:30.410 "subsystem": "bdev", 00:04:30.410 "config": [ 00:04:30.410 { 00:04:30.410 "method": "bdev_set_options", 00:04:30.410 "params": { 00:04:30.410 "bdev_io_pool_size": 65535, 00:04:30.410 "bdev_io_cache_size": 256, 00:04:30.410 "bdev_auto_examine": true, 00:04:30.410 "iobuf_small_cache_size": 128, 00:04:30.410 "iobuf_large_cache_size": 16 00:04:30.410 } 00:04:30.410 }, 00:04:30.410 { 00:04:30.410 "method": "bdev_raid_set_options", 00:04:30.410 "params": { 00:04:30.410 "process_window_size_kb": 1024, 00:04:30.410 "process_max_bandwidth_mb_sec": 0 00:04:30.410 } 00:04:30.410 }, 00:04:30.410 { 00:04:30.410 "method": "bdev_iscsi_set_options", 00:04:30.410 "params": { 00:04:30.410 "timeout_sec": 30 00:04:30.410 } 00:04:30.410 }, 00:04:30.410 { 00:04:30.410 "method": "bdev_nvme_set_options", 00:04:30.410 "params": { 00:04:30.410 "action_on_timeout": "none", 00:04:30.410 "timeout_us": 0, 00:04:30.410 "timeout_admin_us": 0, 00:04:30.410 "keep_alive_timeout_ms": 10000, 00:04:30.410 "arbitration_burst": 0, 00:04:30.410 "low_priority_weight": 0, 00:04:30.410 "medium_priority_weight": 0, 00:04:30.410 "high_priority_weight": 0, 00:04:30.410 "nvme_adminq_poll_period_us": 10000, 00:04:30.410 "nvme_ioq_poll_period_us": 0, 00:04:30.410 "io_queue_requests": 0, 00:04:30.410 "delay_cmd_submit": true, 00:04:30.410 "transport_retry_count": 4, 00:04:30.410 "bdev_retry_count": 3, 00:04:30.410 "transport_ack_timeout": 0, 00:04:30.410 "ctrlr_loss_timeout_sec": 0, 00:04:30.410 "reconnect_delay_sec": 0, 00:04:30.410 "fast_io_fail_timeout_sec": 0, 00:04:30.410 "disable_auto_failback": false, 00:04:30.410 "generate_uuids": false, 00:04:30.410 "transport_tos": 0, 00:04:30.410 "nvme_error_stat": false, 00:04:30.410 "rdma_srq_size": 0, 00:04:30.410 "io_path_stat": false, 
00:04:30.410 "allow_accel_sequence": false, 00:04:30.410 "rdma_max_cq_size": 0, 00:04:30.410 "rdma_cm_event_timeout_ms": 0, 00:04:30.410 "dhchap_digests": [ 00:04:30.410 "sha256", 00:04:30.410 "sha384", 00:04:30.410 "sha512" 00:04:30.410 ], 00:04:30.410 "dhchap_dhgroups": [ 00:04:30.410 "null", 00:04:30.410 "ffdhe2048", 00:04:30.410 "ffdhe3072", 00:04:30.410 "ffdhe4096", 00:04:30.410 "ffdhe6144", 00:04:30.410 "ffdhe8192" 00:04:30.410 ] 00:04:30.410 } 00:04:30.410 }, 00:04:30.410 { 00:04:30.410 "method": "bdev_nvme_set_hotplug", 00:04:30.410 "params": { 00:04:30.410 "period_us": 100000, 00:04:30.410 "enable": false 00:04:30.410 } 00:04:30.410 }, 00:04:30.410 { 00:04:30.410 "method": "bdev_wait_for_examine" 00:04:30.410 } 00:04:30.410 ] 00:04:30.410 }, 00:04:30.410 { 00:04:30.410 "subsystem": "scsi", 00:04:30.410 "config": null 00:04:30.410 }, 00:04:30.410 { 00:04:30.410 "subsystem": "scheduler", 00:04:30.410 "config": [ 00:04:30.410 { 00:04:30.410 "method": "framework_set_scheduler", 00:04:30.410 "params": { 00:04:30.410 "name": "static" 00:04:30.410 } 00:04:30.410 } 00:04:30.410 ] 00:04:30.410 }, 00:04:30.410 { 00:04:30.410 "subsystem": "vhost_scsi", 00:04:30.410 "config": [] 00:04:30.410 }, 00:04:30.410 { 00:04:30.410 "subsystem": "vhost_blk", 00:04:30.410 "config": [] 00:04:30.410 }, 00:04:30.410 { 00:04:30.410 "subsystem": "ublk", 00:04:30.410 "config": [] 00:04:30.410 }, 00:04:30.410 { 00:04:30.410 "subsystem": "nbd", 00:04:30.410 "config": [] 00:04:30.410 }, 00:04:30.410 { 00:04:30.410 "subsystem": "nvmf", 00:04:30.410 "config": [ 00:04:30.410 { 00:04:30.410 "method": "nvmf_set_config", 00:04:30.410 "params": { 00:04:30.410 "discovery_filter": "match_any", 00:04:30.410 "admin_cmd_passthru": { 00:04:30.410 "identify_ctrlr": false 00:04:30.410 }, 00:04:30.410 "dhchap_digests": [ 00:04:30.410 "sha256", 00:04:30.410 "sha384", 00:04:30.410 "sha512" 00:04:30.410 ], 00:04:30.410 "dhchap_dhgroups": [ 00:04:30.410 "null", 00:04:30.410 "ffdhe2048", 00:04:30.410 "ffdhe3072", 00:04:30.410 "ffdhe4096", 00:04:30.410 "ffdhe6144", 00:04:30.410 "ffdhe8192" 00:04:30.410 ] 00:04:30.410 } 00:04:30.410 }, 00:04:30.410 { 00:04:30.410 "method": "nvmf_set_max_subsystems", 00:04:30.410 "params": { 00:04:30.410 "max_subsystems": 1024 00:04:30.410 } 00:04:30.410 }, 00:04:30.410 { 00:04:30.410 "method": "nvmf_set_crdt", 00:04:30.410 "params": { 00:04:30.410 "crdt1": 0, 00:04:30.410 "crdt2": 0, 00:04:30.410 "crdt3": 0 00:04:30.410 } 00:04:30.410 }, 00:04:30.410 { 00:04:30.410 "method": "nvmf_create_transport", 00:04:30.410 "params": { 00:04:30.410 "trtype": "TCP", 00:04:30.410 "max_queue_depth": 128, 00:04:30.410 "max_io_qpairs_per_ctrlr": 127, 00:04:30.410 "in_capsule_data_size": 4096, 00:04:30.410 "max_io_size": 131072, 00:04:30.410 "io_unit_size": 131072, 00:04:30.410 "max_aq_depth": 128, 00:04:30.410 "num_shared_buffers": 511, 00:04:30.410 "buf_cache_size": 4294967295, 00:04:30.410 "dif_insert_or_strip": false, 00:04:30.410 "zcopy": false, 00:04:30.410 "c2h_success": true, 00:04:30.410 "sock_priority": 0, 00:04:30.410 "abort_timeout_sec": 1, 00:04:30.410 "ack_timeout": 0, 00:04:30.410 "data_wr_pool_size": 0 00:04:30.410 } 00:04:30.410 } 00:04:30.410 ] 00:04:30.410 }, 00:04:30.410 { 00:04:30.410 "subsystem": "iscsi", 00:04:30.410 "config": [ 00:04:30.410 { 00:04:30.410 "method": "iscsi_set_options", 00:04:30.410 "params": { 00:04:30.410 "node_base": "iqn.2016-06.io.spdk", 00:04:30.410 "max_sessions": 128, 00:04:30.410 "max_connections_per_session": 2, 00:04:30.410 "max_queue_depth": 64, 00:04:30.410 
"default_time2wait": 2, 00:04:30.410 "default_time2retain": 20, 00:04:30.410 "first_burst_length": 8192, 00:04:30.410 "immediate_data": true, 00:04:30.410 "allow_duplicated_isid": false, 00:04:30.410 "error_recovery_level": 0, 00:04:30.410 "nop_timeout": 60, 00:04:30.410 "nop_in_interval": 30, 00:04:30.410 "disable_chap": false, 00:04:30.410 "require_chap": false, 00:04:30.410 "mutual_chap": false, 00:04:30.410 "chap_group": 0, 00:04:30.410 "max_large_datain_per_connection": 64, 00:04:30.410 "max_r2t_per_connection": 4, 00:04:30.410 "pdu_pool_size": 36864, 00:04:30.410 "immediate_data_pool_size": 16384, 00:04:30.410 "data_out_pool_size": 2048 00:04:30.410 } 00:04:30.410 } 00:04:30.410 ] 00:04:30.410 } 00:04:30.410 ] 00:04:30.410 } 00:04:30.410 17:21:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:30.410 17:21:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57441 00:04:30.410 17:21:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57441 ']' 00:04:30.410 17:21:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57441 00:04:30.410 17:21:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:30.411 17:21:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.670 17:21:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57441 00:04:30.670 17:21:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.670 killing process with pid 57441 00:04:30.670 17:21:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.670 17:21:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57441' 00:04:30.670 17:21:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57441 00:04:30.670 17:21:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57441 00:04:32.047 17:21:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57476 00:04:32.047 17:21:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:32.047 17:21:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:37.320 17:21:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57476 00:04:37.320 17:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57476 ']' 00:04:37.320 17:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57476 00:04:37.320 17:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:37.320 17:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.320 17:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57476 00:04:37.320 17:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.320 17:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.320 killing process with pid 57476 00:04:37.320 17:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57476' 00:04:37.320 17:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57476 00:04:37.320 17:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57476 00:04:38.256 17:21:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:38.256 17:21:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:38.256 00:04:38.256 real 0m8.700s 00:04:38.256 user 0m8.326s 00:04:38.256 sys 0m0.582s 00:04:38.256 17:21:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.256 ************************************ 00:04:38.256 END TEST skip_rpc_with_json 00:04:38.256 ************************************ 00:04:38.256 17:21:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.256 17:21:11 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:38.256 17:21:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.256 17:21:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.256 17:21:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.256 ************************************ 00:04:38.256 START TEST skip_rpc_with_delay 00:04:38.256 ************************************ 00:04:38.256 17:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:38.256 17:21:11 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:38.256 17:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:38.256 17:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:38.256 17:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.256 17:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.256 17:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.256 17:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.256 17:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.256 17:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.256 17:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.256 17:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:38.256 17:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:38.256 [2024-12-07 17:21:11.540272] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:38.256 17:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:38.256 17:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:38.256 17:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:38.256 17:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:38.256 00:04:38.256 real 0m0.131s 00:04:38.256 user 0m0.071s 00:04:38.256 sys 0m0.059s 00:04:38.256 17:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.256 17:21:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:38.256 ************************************ 00:04:38.256 END TEST skip_rpc_with_delay 00:04:38.256 ************************************ 00:04:38.256 17:21:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:38.518 17:21:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:38.518 17:21:11 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:38.518 17:21:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.518 17:21:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.518 17:21:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.518 ************************************ 00:04:38.518 START TEST exit_on_failed_rpc_init 00:04:38.518 ************************************ 00:04:38.518 17:21:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:38.518 17:21:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57599 00:04:38.518 17:21:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57599 00:04:38.518 17:21:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57599 ']' 00:04:38.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.518 17:21:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.518 17:21:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.518 17:21:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:38.518 17:21:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.518 17:21:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.518 17:21:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:38.518 [2024-12-07 17:21:11.760425] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:38.518 [2024-12-07 17:21:11.760866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57599 ] 00:04:38.779 [2024-12-07 17:21:11.929775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.779 [2024-12-07 17:21:12.090970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.720 17:21:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.720 17:21:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:39.720 17:21:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.720 17:21:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.720 17:21:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:39.720 17:21:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.720 17:21:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.720 17:21:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.720 17:21:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.721 17:21:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.721 17:21:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.721 17:21:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.721 17:21:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.721 17:21:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:39.721 17:21:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.721 [2024-12-07 17:21:12.933044] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:39.721 [2024-12-07 17:21:12.933169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57617 ] 00:04:39.721 [2024-12-07 17:21:13.089190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.981 [2024-12-07 17:21:13.182755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.981 [2024-12-07 17:21:13.182817] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:39.981 [2024-12-07 17:21:13.182830] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:39.981 [2024-12-07 17:21:13.182842] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:40.241 17:21:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:40.241 17:21:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:40.241 17:21:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:40.241 17:21:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:40.241 17:21:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:40.241 17:21:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:40.241 17:21:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:40.241 17:21:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57599 00:04:40.241 17:21:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57599 ']' 00:04:40.241 17:21:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57599 00:04:40.241 17:21:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:40.241 17:21:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.241 17:21:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57599 00:04:40.241 killing process with pid 57599 00:04:40.241 17:21:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.241 17:21:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.241 17:21:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57599' 00:04:40.241 17:21:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57599 00:04:40.242 17:21:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57599 00:04:41.626 00:04:41.626 real 0m3.111s 00:04:41.626 user 0m3.231s 00:04:41.626 sys 0m0.592s 00:04:41.626 17:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.626 17:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.626 ************************************ 00:04:41.626 END TEST exit_on_failed_rpc_init 00:04:41.626 ************************************ 00:04:41.626 17:21:14 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:41.626 ************************************ 00:04:41.626 END TEST skip_rpc 00:04:41.626 ************************************ 00:04:41.626 00:04:41.626 real 0m18.518s 00:04:41.626 user 0m17.592s 00:04:41.626 sys 0m1.703s 00:04:41.626 17:21:14 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.626 17:21:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.626 17:21:14 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:41.626 17:21:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.626 17:21:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.626 17:21:14 -- common/autotest_common.sh@10 -- # set +x 00:04:41.626 
************************************ 00:04:41.626 START TEST rpc_client 00:04:41.626 ************************************ 00:04:41.626 17:21:14 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:41.626 * Looking for test storage... 00:04:41.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:41.626 17:21:14 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:41.626 17:21:14 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:41.626 17:21:14 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:41.626 17:21:14 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.626 17:21:14 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:41.626 17:21:14 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.626 17:21:14 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:41.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.626 --rc genhtml_branch_coverage=1 00:04:41.626 --rc genhtml_function_coverage=1 00:04:41.626 --rc genhtml_legend=1 00:04:41.626 --rc geninfo_all_blocks=1 00:04:41.626 --rc geninfo_unexecuted_blocks=1 00:04:41.626 00:04:41.626 ' 00:04:41.626 17:21:14 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:41.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.626 --rc genhtml_branch_coverage=1 00:04:41.626 --rc genhtml_function_coverage=1 00:04:41.626 --rc genhtml_legend=1 00:04:41.626 --rc geninfo_all_blocks=1 00:04:41.626 --rc geninfo_unexecuted_blocks=1 00:04:41.626 00:04:41.626 ' 00:04:41.626 17:21:14 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:41.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.626 --rc genhtml_branch_coverage=1 00:04:41.626 --rc genhtml_function_coverage=1 00:04:41.626 --rc genhtml_legend=1 00:04:41.626 --rc geninfo_all_blocks=1 00:04:41.626 --rc geninfo_unexecuted_blocks=1 00:04:41.626 00:04:41.626 ' 00:04:41.626 17:21:14 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:41.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.626 --rc genhtml_branch_coverage=1 00:04:41.626 --rc genhtml_function_coverage=1 00:04:41.626 --rc genhtml_legend=1 00:04:41.626 --rc geninfo_all_blocks=1 00:04:41.626 --rc geninfo_unexecuted_blocks=1 00:04:41.626 00:04:41.626 ' 00:04:41.626 17:21:14 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:41.992 OK 00:04:41.992 17:21:15 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:41.992 00:04:41.992 real 0m0.186s 00:04:41.992 user 0m0.103s 00:04:41.992 sys 0m0.091s 00:04:41.992 17:21:15 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.992 ************************************ 00:04:41.992 17:21:15 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:41.992 END TEST rpc_client 00:04:41.992 ************************************ 00:04:41.992 17:21:15 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:41.992 17:21:15 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.992 17:21:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.992 17:21:15 -- common/autotest_common.sh@10 -- # set +x 00:04:41.992 ************************************ 00:04:41.992 START TEST json_config 00:04:41.992 ************************************ 00:04:41.992 17:21:15 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:41.992 17:21:15 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:41.992 17:21:15 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:41.992 17:21:15 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:41.992 17:21:15 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:41.992 17:21:15 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.992 17:21:15 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.992 17:21:15 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.992 17:21:15 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.992 17:21:15 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.992 17:21:15 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.992 17:21:15 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.992 17:21:15 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.992 17:21:15 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.992 17:21:15 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.992 17:21:15 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.992 17:21:15 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:41.992 17:21:15 json_config -- scripts/common.sh@345 -- # : 1 00:04:41.992 17:21:15 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.992 17:21:15 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.992 17:21:15 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:41.992 17:21:15 json_config -- scripts/common.sh@353 -- # local d=1 00:04:41.992 17:21:15 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.992 17:21:15 json_config -- scripts/common.sh@355 -- # echo 1 00:04:41.992 17:21:15 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.992 17:21:15 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:41.992 17:21:15 json_config -- scripts/common.sh@353 -- # local d=2 00:04:41.992 17:21:15 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.992 17:21:15 json_config -- scripts/common.sh@355 -- # echo 2 00:04:41.992 17:21:15 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.992 17:21:15 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.992 17:21:15 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.992 17:21:15 json_config -- scripts/common.sh@368 -- # return 0 00:04:41.992 17:21:15 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.992 17:21:15 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:41.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.992 --rc genhtml_branch_coverage=1 00:04:41.992 --rc genhtml_function_coverage=1 00:04:41.992 --rc genhtml_legend=1 00:04:41.992 --rc geninfo_all_blocks=1 00:04:41.992 --rc geninfo_unexecuted_blocks=1 00:04:41.992 00:04:41.992 ' 00:04:41.992 17:21:15 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:41.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.992 --rc genhtml_branch_coverage=1 00:04:41.992 --rc genhtml_function_coverage=1 00:04:41.992 --rc genhtml_legend=1 00:04:41.992 --rc geninfo_all_blocks=1 00:04:41.992 --rc geninfo_unexecuted_blocks=1 00:04:41.992 00:04:41.992 ' 00:04:41.992 17:21:15 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:41.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.992 --rc genhtml_branch_coverage=1 00:04:41.992 --rc genhtml_function_coverage=1 00:04:41.992 --rc genhtml_legend=1 00:04:41.992 --rc geninfo_all_blocks=1 00:04:41.992 --rc geninfo_unexecuted_blocks=1 00:04:41.992 00:04:41.992 ' 00:04:41.992 17:21:15 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:41.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.992 --rc genhtml_branch_coverage=1 00:04:41.992 --rc genhtml_function_coverage=1 00:04:41.992 --rc genhtml_legend=1 00:04:41.992 --rc geninfo_all_blocks=1 00:04:41.992 --rc geninfo_unexecuted_blocks=1 00:04:41.992 00:04:41.992 ' 00:04:41.992 17:21:15 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:41.992 17:21:15 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:41.992 17:21:15 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.992 17:21:15 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.992 17:21:15 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.992 17:21:15 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.992 17:21:15 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.992 17:21:15 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.992 17:21:15 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.992 17:21:15 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.992 17:21:15 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.992 17:21:15 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.992 17:21:15 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6a1993f-f9da-47c9-a274-8812dd505b00 00:04:41.992 17:21:15 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=b6a1993f-f9da-47c9-a274-8812dd505b00 00:04:41.992 17:21:15 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.992 17:21:15 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.992 17:21:15 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.992 17:21:15 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.992 17:21:15 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:41.992 17:21:15 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:41.992 17:21:15 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.992 17:21:15 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.992 17:21:15 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.992 17:21:15 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.992 17:21:15 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.992 17:21:15 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.992 17:21:15 json_config -- paths/export.sh@5 -- # export PATH 00:04:41.993 17:21:15 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.993 17:21:15 json_config -- nvmf/common.sh@51 -- # : 0 00:04:41.993 17:21:15 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:41.993 17:21:15 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:41.993 17:21:15 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.993 17:21:15 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.993 17:21:15 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.993 17:21:15 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:41.993 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:41.993 17:21:15 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:41.993 17:21:15 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:41.993 17:21:15 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:41.993 17:21:15 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:41.993 17:21:15 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:41.993 17:21:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:41.993 17:21:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:41.993 17:21:15 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:41.993 17:21:15 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:41.993 WARNING: No tests are enabled so not running JSON configuration tests 00:04:41.993 17:21:15 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:41.993 00:04:41.993 real 0m0.146s 00:04:41.993 user 0m0.092s 00:04:41.993 sys 0m0.050s 00:04:41.993 17:21:15 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.993 17:21:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.993 ************************************ 00:04:41.993 END TEST json_config 00:04:41.993 ************************************ 00:04:41.993 17:21:15 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:41.993 17:21:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.993 17:21:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.993 17:21:15 -- common/autotest_common.sh@10 -- # set +x 00:04:41.993 ************************************ 00:04:41.993 START TEST json_config_extra_key 00:04:41.993 ************************************ 00:04:41.993 17:21:15 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:42.277 17:21:15 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:42.277 17:21:15 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:42.277 17:21:15 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:42.277 17:21:15 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.277 17:21:15 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:42.277 17:21:15 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.277 17:21:15 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:42.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.277 --rc genhtml_branch_coverage=1 00:04:42.277 --rc genhtml_function_coverage=1 00:04:42.277 --rc genhtml_legend=1 00:04:42.277 --rc geninfo_all_blocks=1 00:04:42.277 --rc geninfo_unexecuted_blocks=1 00:04:42.277 00:04:42.277 ' 00:04:42.277 17:21:15 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:42.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.277 --rc genhtml_branch_coverage=1 00:04:42.277 --rc genhtml_function_coverage=1 00:04:42.277 --rc genhtml_legend=1 00:04:42.277 --rc geninfo_all_blocks=1 00:04:42.277 --rc geninfo_unexecuted_blocks=1 00:04:42.277 00:04:42.277 ' 00:04:42.277 17:21:15 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:42.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.277 --rc genhtml_branch_coverage=1 00:04:42.277 --rc genhtml_function_coverage=1 00:04:42.277 --rc genhtml_legend=1 00:04:42.277 --rc geninfo_all_blocks=1 00:04:42.277 --rc geninfo_unexecuted_blocks=1 00:04:42.277 00:04:42.277 ' 00:04:42.277 17:21:15 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:42.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.277 --rc genhtml_branch_coverage=1 00:04:42.277 --rc 
genhtml_function_coverage=1 00:04:42.277 --rc genhtml_legend=1 00:04:42.277 --rc geninfo_all_blocks=1 00:04:42.277 --rc geninfo_unexecuted_blocks=1 00:04:42.277 00:04:42.277 ' 00:04:42.277 17:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b6a1993f-f9da-47c9-a274-8812dd505b00 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b6a1993f-f9da-47c9-a274-8812dd505b00 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.277 17:21:15 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.277 17:21:15 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.277 17:21:15 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.277 17:21:15 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.277 17:21:15 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:42.277 17:21:15 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:42.277 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:42.277 17:21:15 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:42.277 17:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:42.277 17:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:42.277 17:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:42.277 17:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:42.277 17:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:42.277 17:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:42.277 17:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:42.277 17:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:42.277 17:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:42.277 17:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:42.277 17:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:42.278 INFO: launching applications... 
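[Editor's sketch] The per-application bookkeeping that json_config/common.sh declares above (app_pid, app_socket, app_params, configs_path) is what drives the launch traced next. A minimal sketch of the same pattern, assuming an SPDK build under ./build/bin and an extra_key.json in the working directory; start_app is a hypothetical name, not the test's real helper:

declare -A app_pid=(['target']='')
declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
declare -A app_params=(['target']='-m 0x1 -s 1024')
declare -A configs_path=(['target']="$PWD/extra_key.json")

start_app() {
    local app=$1
    # Launch spdk_tgt with this app's RPC socket and JSON config; record its PID.
    # app_params is intentionally unquoted so "-m 0x1 -s 1024" word-splits into flags.
    ./build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" \
        --json "${configs_path[$app]}" &
    app_pid[$app]=$!
}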
00:04:42.278 17:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:42.278 17:21:15 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:42.278 17:21:15 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:42.278 17:21:15 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:42.278 17:21:15 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:42.278 17:21:15 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:42.278 17:21:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.278 17:21:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.278 17:21:15 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57810 00:04:42.278 17:21:15 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:42.278 Waiting for target to run... 00:04:42.278 17:21:15 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:42.278 17:21:15 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57810 /var/tmp/spdk_tgt.sock 00:04:42.278 17:21:15 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57810 ']' 00:04:42.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:42.278 17:21:15 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:42.278 17:21:15 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.278 17:21:15 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:42.278 17:21:15 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.278 17:21:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:42.278 [2024-12-07 17:21:15.523345] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:42.278 [2024-12-07 17:21:15.523456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57810 ] 00:04:42.538 [2024-12-07 17:21:15.839128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.799 [2024-12-07 17:21:15.933281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.369 00:04:43.369 INFO: shutting down applications... 00:04:43.369 17:21:16 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.369 17:21:16 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:43.369 17:21:16 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:43.369 17:21:16 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
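[Editor's sketch] The waitforlisten call traced above retries up to max_retries=100 before giving up. A hypothetical polling loop in the same spirit — not the real helper from common/autotest_common.sh, whose internals are not shown in this log — assuming scripts/rpc.py from the SPDK tree:

wait_for_rpc() {
    local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} i
    for ((i = 0; i < 100; i++)); do
        # Bail out early if the target died during startup.
        kill -0 "$pid" 2>/dev/null || return 1
        # Probe the RPC socket; any successful call means it is up and listening.
        if scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}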
00:04:43.369 17:21:16 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:43.369 17:21:16 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:43.369 17:21:16 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:43.369 17:21:16 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57810 ]] 00:04:43.369 17:21:16 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57810 00:04:43.369 17:21:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:43.369 17:21:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.369 17:21:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57810 00:04:43.369 17:21:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:43.630 17:21:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:43.630 17:21:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.630 17:21:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57810 00:04:43.630 17:21:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:44.202 17:21:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:44.202 17:21:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.202 17:21:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57810 00:04:44.202 17:21:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:44.770 17:21:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:44.770 17:21:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.770 17:21:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57810 00:04:44.770 17:21:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:45.343 17:21:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:45.343 17:21:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.343 17:21:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57810 00:04:45.343 17:21:18 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:45.343 17:21:18 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:45.343 SPDK target shutdown done 00:04:45.343 17:21:18 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:45.343 17:21:18 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:45.343 Success 00:04:45.343 17:21:18 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:45.343 ************************************ 00:04:45.343 END TEST json_config_extra_key 00:04:45.343 ************************************ 00:04:45.343 00:04:45.343 real 0m3.163s 00:04:45.343 user 0m2.640s 00:04:45.343 sys 0m0.411s 00:04:45.343 17:21:18 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.343 17:21:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:45.343 17:21:18 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:45.343 17:21:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.343 17:21:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.343 17:21:18 -- common/autotest_common.sh@10 -- # set +x 00:04:45.343 
************************************ 00:04:45.343 START TEST alias_rpc 00:04:45.343 ************************************ 00:04:45.343 17:21:18 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:45.343 * Looking for test storage... 00:04:45.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:45.343 17:21:18 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:45.343 17:21:18 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:45.343 17:21:18 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:45.343 17:21:18 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.343 17:21:18 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:45.343 17:21:18 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.343 17:21:18 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:45.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.343 --rc genhtml_branch_coverage=1 00:04:45.343 --rc genhtml_function_coverage=1 00:04:45.343 --rc genhtml_legend=1 00:04:45.343 --rc geninfo_all_blocks=1 00:04:45.343 --rc geninfo_unexecuted_blocks=1 00:04:45.343 00:04:45.343 ' 00:04:45.343 17:21:18 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:45.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.343 --rc genhtml_branch_coverage=1 00:04:45.343 --rc genhtml_function_coverage=1 00:04:45.343 --rc genhtml_legend=1 00:04:45.344 --rc geninfo_all_blocks=1 00:04:45.344 --rc geninfo_unexecuted_blocks=1 00:04:45.344 00:04:45.344 ' 00:04:45.344 17:21:18 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:45.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.344 --rc genhtml_branch_coverage=1 00:04:45.344 --rc genhtml_function_coverage=1 00:04:45.344 --rc genhtml_legend=1 00:04:45.344 --rc geninfo_all_blocks=1 00:04:45.344 --rc geninfo_unexecuted_blocks=1 00:04:45.344 00:04:45.344 ' 00:04:45.344 17:21:18 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:45.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.344 --rc genhtml_branch_coverage=1 00:04:45.344 --rc genhtml_function_coverage=1 00:04:45.344 --rc genhtml_legend=1 00:04:45.344 --rc geninfo_all_blocks=1 00:04:45.344 --rc geninfo_unexecuted_blocks=1 00:04:45.344 00:04:45.344 ' 00:04:45.344 17:21:18 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:45.344 17:21:18 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57909 00:04:45.344 17:21:18 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57909 00:04:45.344 17:21:18 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57909 ']' 00:04:45.344 17:21:18 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
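[Editor's sketch] The scripts/common.sh probe traced above ("lt 1.15 2" via cmp_versions) decides whether the installed lcov predates 2.0 and therefore needs the old-style --rc coverage options. A simplified sketch of that comparison — split the version strings on '.', '-' and ':' as the IFS=.-: trace shows, then compare numerically field by field; padding missing fields with 0 is my simplification of the fuller logic:

version_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "lcov < 2: use old-style --rc options"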
00:04:45.344 17:21:18 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.344 17:21:18 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.344 17:21:18 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.344 17:21:18 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.344 17:21:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.606 [2024-12-07 17:21:18.752100] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:45.606 [2024-12-07 17:21:18.752239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57909 ] 00:04:45.606 [2024-12-07 17:21:18.914106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.867 [2024-12-07 17:21:19.050394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.441 17:21:19 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.441 17:21:19 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:46.441 17:21:19 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:46.702 17:21:19 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57909 00:04:46.703 17:21:19 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57909 ']' 00:04:46.703 17:21:19 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57909 00:04:46.703 17:21:19 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:46.703 17:21:19 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.703 17:21:19 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57909 00:04:46.703 killing process with pid 57909 00:04:46.703 17:21:20 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.703 17:21:20 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.703 17:21:20 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57909' 00:04:46.703 17:21:20 alias_rpc -- common/autotest_common.sh@973 -- # kill 57909 00:04:46.703 17:21:20 alias_rpc -- common/autotest_common.sh@978 -- # wait 57909 00:04:48.605 ************************************ 00:04:48.605 END TEST alias_rpc 00:04:48.605 ************************************ 00:04:48.605 00:04:48.605 real 0m3.014s 00:04:48.605 user 0m3.040s 00:04:48.605 sys 0m0.514s 00:04:48.605 17:21:21 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.605 17:21:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.605 17:21:21 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:48.605 17:21:21 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:48.605 17:21:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.605 17:21:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.605 17:21:21 -- common/autotest_common.sh@10 -- # set +x 00:04:48.605 ************************************ 00:04:48.605 START TEST spdkcli_tcp 00:04:48.605 ************************************ 00:04:48.605 17:21:21 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:48.605 * Looking for 
test storage... 00:04:48.605 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:48.606 17:21:21 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:48.606 17:21:21 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:48.606 17:21:21 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:48.606 17:21:21 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.606 17:21:21 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:48.606 17:21:21 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.606 17:21:21 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:48.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.606 --rc genhtml_branch_coverage=1 00:04:48.606 --rc genhtml_function_coverage=1 00:04:48.606 --rc genhtml_legend=1 00:04:48.606 --rc geninfo_all_blocks=1 00:04:48.606 --rc geninfo_unexecuted_blocks=1 00:04:48.606 00:04:48.606 ' 00:04:48.606 17:21:21 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:48.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.606 --rc genhtml_branch_coverage=1 00:04:48.606 --rc genhtml_function_coverage=1 00:04:48.606 --rc genhtml_legend=1 00:04:48.606 --rc geninfo_all_blocks=1 00:04:48.606 --rc geninfo_unexecuted_blocks=1 
00:04:48.606 00:04:48.606 ' 00:04:48.606 17:21:21 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:48.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.606 --rc genhtml_branch_coverage=1 00:04:48.606 --rc genhtml_function_coverage=1 00:04:48.606 --rc genhtml_legend=1 00:04:48.606 --rc geninfo_all_blocks=1 00:04:48.606 --rc geninfo_unexecuted_blocks=1 00:04:48.606 00:04:48.606 ' 00:04:48.606 17:21:21 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:48.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.606 --rc genhtml_branch_coverage=1 00:04:48.606 --rc genhtml_function_coverage=1 00:04:48.606 --rc genhtml_legend=1 00:04:48.606 --rc geninfo_all_blocks=1 00:04:48.606 --rc geninfo_unexecuted_blocks=1 00:04:48.606 00:04:48.606 ' 00:04:48.606 17:21:21 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:48.606 17:21:21 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:48.606 17:21:21 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:48.606 17:21:21 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:48.606 17:21:21 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:48.606 17:21:21 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:48.606 17:21:21 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:48.606 17:21:21 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:48.606 17:21:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:48.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.606 17:21:21 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58005 00:04:48.606 17:21:21 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58005 00:04:48.606 17:21:21 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58005 ']' 00:04:48.606 17:21:21 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.606 17:21:21 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.606 17:21:21 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.606 17:21:21 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.606 17:21:21 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:48.606 17:21:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:48.606 [2024-12-07 17:21:21.818166] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
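[Editor's sketch] tcp.sh exercises the same RPC surface over TCP: as the trace below shows, socat bridges TCP port 9998 to the target's UNIX-domain socket, and rpc.py is then pointed at 127.0.0.1:9998 with retries and a timeout. The essence of that bridge, using the same invocations that appear in the trace:

# One-shot bridge: forward a TCP connection on 9998 to the RPC socket.
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# -r retries, -t timeout in seconds; rpc_get_methods lists every registered RPC.
scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

kill "$socat_pid" 2>/dev/null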
00:04:48.606 [2024-12-07 17:21:21.818459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58005 ] 00:04:48.606 [2024-12-07 17:21:21.976152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.863 [2024-12-07 17:21:22.057548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.863 [2024-12-07 17:21:22.057569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.429 17:21:22 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.429 17:21:22 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:49.429 17:21:22 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58016 00:04:49.429 17:21:22 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:49.429 17:21:22 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:49.686 [ 00:04:49.686 "bdev_malloc_delete", 00:04:49.686 "bdev_malloc_create", 00:04:49.686 "bdev_null_resize", 00:04:49.686 "bdev_null_delete", 00:04:49.686 "bdev_null_create", 00:04:49.686 "bdev_nvme_cuse_unregister", 00:04:49.686 "bdev_nvme_cuse_register", 00:04:49.686 "bdev_opal_new_user", 00:04:49.686 "bdev_opal_set_lock_state", 00:04:49.686 "bdev_opal_delete", 00:04:49.686 "bdev_opal_get_info", 00:04:49.686 "bdev_opal_create", 00:04:49.686 "bdev_nvme_opal_revert", 00:04:49.686 "bdev_nvme_opal_init", 00:04:49.686 "bdev_nvme_send_cmd", 00:04:49.686 "bdev_nvme_set_keys", 00:04:49.686 "bdev_nvme_get_path_iostat", 00:04:49.686 "bdev_nvme_get_mdns_discovery_info", 00:04:49.686 "bdev_nvme_stop_mdns_discovery", 00:04:49.686 "bdev_nvme_start_mdns_discovery", 00:04:49.686 "bdev_nvme_set_multipath_policy", 00:04:49.686 "bdev_nvme_set_preferred_path", 00:04:49.686 "bdev_nvme_get_io_paths", 00:04:49.686 "bdev_nvme_remove_error_injection", 00:04:49.686 "bdev_nvme_add_error_injection", 00:04:49.686 "bdev_nvme_get_discovery_info", 00:04:49.686 "bdev_nvme_stop_discovery", 00:04:49.686 "bdev_nvme_start_discovery", 00:04:49.686 "bdev_nvme_get_controller_health_info", 00:04:49.686 "bdev_nvme_disable_controller", 00:04:49.686 "bdev_nvme_enable_controller", 00:04:49.686 "bdev_nvme_reset_controller", 00:04:49.686 "bdev_nvme_get_transport_statistics", 00:04:49.686 "bdev_nvme_apply_firmware", 00:04:49.686 "bdev_nvme_detach_controller", 00:04:49.686 "bdev_nvme_get_controllers", 00:04:49.686 "bdev_nvme_attach_controller", 00:04:49.686 "bdev_nvme_set_hotplug", 00:04:49.686 "bdev_nvme_set_options", 00:04:49.686 "bdev_passthru_delete", 00:04:49.686 "bdev_passthru_create", 00:04:49.686 "bdev_lvol_set_parent_bdev", 00:04:49.686 "bdev_lvol_set_parent", 00:04:49.686 "bdev_lvol_check_shallow_copy", 00:04:49.686 "bdev_lvol_start_shallow_copy", 00:04:49.686 "bdev_lvol_grow_lvstore", 00:04:49.686 "bdev_lvol_get_lvols", 00:04:49.686 "bdev_lvol_get_lvstores", 00:04:49.686 "bdev_lvol_delete", 00:04:49.686 "bdev_lvol_set_read_only", 00:04:49.686 "bdev_lvol_resize", 00:04:49.686 "bdev_lvol_decouple_parent", 00:04:49.686 "bdev_lvol_inflate", 00:04:49.686 "bdev_lvol_rename", 00:04:49.686 "bdev_lvol_clone_bdev", 00:04:49.686 "bdev_lvol_clone", 00:04:49.686 "bdev_lvol_snapshot", 00:04:49.686 "bdev_lvol_create", 00:04:49.686 "bdev_lvol_delete_lvstore", 00:04:49.686 "bdev_lvol_rename_lvstore", 00:04:49.686 
"bdev_lvol_create_lvstore", 00:04:49.686 "bdev_raid_set_options", 00:04:49.686 "bdev_raid_remove_base_bdev", 00:04:49.686 "bdev_raid_add_base_bdev", 00:04:49.686 "bdev_raid_delete", 00:04:49.686 "bdev_raid_create", 00:04:49.686 "bdev_raid_get_bdevs", 00:04:49.686 "bdev_error_inject_error", 00:04:49.686 "bdev_error_delete", 00:04:49.686 "bdev_error_create", 00:04:49.686 "bdev_split_delete", 00:04:49.686 "bdev_split_create", 00:04:49.686 "bdev_delay_delete", 00:04:49.686 "bdev_delay_create", 00:04:49.686 "bdev_delay_update_latency", 00:04:49.686 "bdev_zone_block_delete", 00:04:49.686 "bdev_zone_block_create", 00:04:49.686 "blobfs_create", 00:04:49.686 "blobfs_detect", 00:04:49.686 "blobfs_set_cache_size", 00:04:49.686 "bdev_xnvme_delete", 00:04:49.686 "bdev_xnvme_create", 00:04:49.686 "bdev_aio_delete", 00:04:49.686 "bdev_aio_rescan", 00:04:49.686 "bdev_aio_create", 00:04:49.686 "bdev_ftl_set_property", 00:04:49.686 "bdev_ftl_get_properties", 00:04:49.686 "bdev_ftl_get_stats", 00:04:49.686 "bdev_ftl_unmap", 00:04:49.686 "bdev_ftl_unload", 00:04:49.686 "bdev_ftl_delete", 00:04:49.686 "bdev_ftl_load", 00:04:49.686 "bdev_ftl_create", 00:04:49.686 "bdev_virtio_attach_controller", 00:04:49.686 "bdev_virtio_scsi_get_devices", 00:04:49.686 "bdev_virtio_detach_controller", 00:04:49.686 "bdev_virtio_blk_set_hotplug", 00:04:49.686 "bdev_iscsi_delete", 00:04:49.686 "bdev_iscsi_create", 00:04:49.686 "bdev_iscsi_set_options", 00:04:49.686 "accel_error_inject_error", 00:04:49.686 "ioat_scan_accel_module", 00:04:49.686 "dsa_scan_accel_module", 00:04:49.686 "iaa_scan_accel_module", 00:04:49.686 "keyring_file_remove_key", 00:04:49.686 "keyring_file_add_key", 00:04:49.686 "keyring_linux_set_options", 00:04:49.686 "fsdev_aio_delete", 00:04:49.686 "fsdev_aio_create", 00:04:49.686 "iscsi_get_histogram", 00:04:49.686 "iscsi_enable_histogram", 00:04:49.686 "iscsi_set_options", 00:04:49.686 "iscsi_get_auth_groups", 00:04:49.686 "iscsi_auth_group_remove_secret", 00:04:49.686 "iscsi_auth_group_add_secret", 00:04:49.686 "iscsi_delete_auth_group", 00:04:49.686 "iscsi_create_auth_group", 00:04:49.686 "iscsi_set_discovery_auth", 00:04:49.686 "iscsi_get_options", 00:04:49.686 "iscsi_target_node_request_logout", 00:04:49.686 "iscsi_target_node_set_redirect", 00:04:49.686 "iscsi_target_node_set_auth", 00:04:49.686 "iscsi_target_node_add_lun", 00:04:49.686 "iscsi_get_stats", 00:04:49.686 "iscsi_get_connections", 00:04:49.686 "iscsi_portal_group_set_auth", 00:04:49.686 "iscsi_start_portal_group", 00:04:49.686 "iscsi_delete_portal_group", 00:04:49.686 "iscsi_create_portal_group", 00:04:49.686 "iscsi_get_portal_groups", 00:04:49.686 "iscsi_delete_target_node", 00:04:49.686 "iscsi_target_node_remove_pg_ig_maps", 00:04:49.686 "iscsi_target_node_add_pg_ig_maps", 00:04:49.686 "iscsi_create_target_node", 00:04:49.686 "iscsi_get_target_nodes", 00:04:49.686 "iscsi_delete_initiator_group", 00:04:49.686 "iscsi_initiator_group_remove_initiators", 00:04:49.686 "iscsi_initiator_group_add_initiators", 00:04:49.686 "iscsi_create_initiator_group", 00:04:49.686 "iscsi_get_initiator_groups", 00:04:49.686 "nvmf_set_crdt", 00:04:49.686 "nvmf_set_config", 00:04:49.686 "nvmf_set_max_subsystems", 00:04:49.686 "nvmf_stop_mdns_prr", 00:04:49.686 "nvmf_publish_mdns_prr", 00:04:49.686 "nvmf_subsystem_get_listeners", 00:04:49.686 "nvmf_subsystem_get_qpairs", 00:04:49.686 "nvmf_subsystem_get_controllers", 00:04:49.686 "nvmf_get_stats", 00:04:49.686 "nvmf_get_transports", 00:04:49.686 "nvmf_create_transport", 00:04:49.686 "nvmf_get_targets", 00:04:49.686 
"nvmf_delete_target", 00:04:49.686 "nvmf_create_target", 00:04:49.686 "nvmf_subsystem_allow_any_host", 00:04:49.686 "nvmf_subsystem_set_keys", 00:04:49.686 "nvmf_subsystem_remove_host", 00:04:49.686 "nvmf_subsystem_add_host", 00:04:49.686 "nvmf_ns_remove_host", 00:04:49.686 "nvmf_ns_add_host", 00:04:49.686 "nvmf_subsystem_remove_ns", 00:04:49.686 "nvmf_subsystem_set_ns_ana_group", 00:04:49.686 "nvmf_subsystem_add_ns", 00:04:49.686 "nvmf_subsystem_listener_set_ana_state", 00:04:49.686 "nvmf_discovery_get_referrals", 00:04:49.686 "nvmf_discovery_remove_referral", 00:04:49.686 "nvmf_discovery_add_referral", 00:04:49.686 "nvmf_subsystem_remove_listener", 00:04:49.686 "nvmf_subsystem_add_listener", 00:04:49.686 "nvmf_delete_subsystem", 00:04:49.686 "nvmf_create_subsystem", 00:04:49.686 "nvmf_get_subsystems", 00:04:49.686 "env_dpdk_get_mem_stats", 00:04:49.686 "nbd_get_disks", 00:04:49.686 "nbd_stop_disk", 00:04:49.686 "nbd_start_disk", 00:04:49.686 "ublk_recover_disk", 00:04:49.686 "ublk_get_disks", 00:04:49.686 "ublk_stop_disk", 00:04:49.686 "ublk_start_disk", 00:04:49.686 "ublk_destroy_target", 00:04:49.686 "ublk_create_target", 00:04:49.686 "virtio_blk_create_transport", 00:04:49.686 "virtio_blk_get_transports", 00:04:49.686 "vhost_controller_set_coalescing", 00:04:49.686 "vhost_get_controllers", 00:04:49.686 "vhost_delete_controller", 00:04:49.686 "vhost_create_blk_controller", 00:04:49.686 "vhost_scsi_controller_remove_target", 00:04:49.686 "vhost_scsi_controller_add_target", 00:04:49.686 "vhost_start_scsi_controller", 00:04:49.686 "vhost_create_scsi_controller", 00:04:49.686 "thread_set_cpumask", 00:04:49.686 "scheduler_set_options", 00:04:49.686 "framework_get_governor", 00:04:49.686 "framework_get_scheduler", 00:04:49.686 "framework_set_scheduler", 00:04:49.686 "framework_get_reactors", 00:04:49.686 "thread_get_io_channels", 00:04:49.686 "thread_get_pollers", 00:04:49.686 "thread_get_stats", 00:04:49.686 "framework_monitor_context_switch", 00:04:49.686 "spdk_kill_instance", 00:04:49.687 "log_enable_timestamps", 00:04:49.687 "log_get_flags", 00:04:49.687 "log_clear_flag", 00:04:49.687 "log_set_flag", 00:04:49.687 "log_get_level", 00:04:49.687 "log_set_level", 00:04:49.687 "log_get_print_level", 00:04:49.687 "log_set_print_level", 00:04:49.687 "framework_enable_cpumask_locks", 00:04:49.687 "framework_disable_cpumask_locks", 00:04:49.687 "framework_wait_init", 00:04:49.687 "framework_start_init", 00:04:49.687 "scsi_get_devices", 00:04:49.687 "bdev_get_histogram", 00:04:49.687 "bdev_enable_histogram", 00:04:49.687 "bdev_set_qos_limit", 00:04:49.687 "bdev_set_qd_sampling_period", 00:04:49.687 "bdev_get_bdevs", 00:04:49.687 "bdev_reset_iostat", 00:04:49.687 "bdev_get_iostat", 00:04:49.687 "bdev_examine", 00:04:49.687 "bdev_wait_for_examine", 00:04:49.687 "bdev_set_options", 00:04:49.687 "accel_get_stats", 00:04:49.687 "accel_set_options", 00:04:49.687 "accel_set_driver", 00:04:49.687 "accel_crypto_key_destroy", 00:04:49.687 "accel_crypto_keys_get", 00:04:49.687 "accel_crypto_key_create", 00:04:49.687 "accel_assign_opc", 00:04:49.687 "accel_get_module_info", 00:04:49.687 "accel_get_opc_assignments", 00:04:49.687 "vmd_rescan", 00:04:49.687 "vmd_remove_device", 00:04:49.687 "vmd_enable", 00:04:49.687 "sock_get_default_impl", 00:04:49.687 "sock_set_default_impl", 00:04:49.687 "sock_impl_set_options", 00:04:49.687 "sock_impl_get_options", 00:04:49.687 "iobuf_get_stats", 00:04:49.687 "iobuf_set_options", 00:04:49.687 "keyring_get_keys", 00:04:49.687 "framework_get_pci_devices", 00:04:49.687 
"framework_get_config", 00:04:49.687 "framework_get_subsystems", 00:04:49.687 "fsdev_set_opts", 00:04:49.687 "fsdev_get_opts", 00:04:49.687 "trace_get_info", 00:04:49.687 "trace_get_tpoint_group_mask", 00:04:49.687 "trace_disable_tpoint_group", 00:04:49.687 "trace_enable_tpoint_group", 00:04:49.687 "trace_clear_tpoint_mask", 00:04:49.687 "trace_set_tpoint_mask", 00:04:49.687 "notify_get_notifications", 00:04:49.687 "notify_get_types", 00:04:49.687 "spdk_get_version", 00:04:49.687 "rpc_get_methods" 00:04:49.687 ] 00:04:49.687 17:21:22 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:49.687 17:21:22 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:49.687 17:21:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.687 17:21:22 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:49.687 17:21:22 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58005 00:04:49.687 17:21:22 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58005 ']' 00:04:49.687 17:21:22 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58005 00:04:49.687 17:21:22 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:49.687 17:21:22 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.687 17:21:22 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58005 00:04:49.687 17:21:22 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.687 17:21:22 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.687 killing process with pid 58005 00:04:49.687 17:21:22 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58005' 00:04:49.687 17:21:22 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58005 00:04:49.687 17:21:22 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58005 00:04:51.063 ************************************ 00:04:51.063 END TEST spdkcli_tcp 00:04:51.063 ************************************ 00:04:51.063 00:04:51.063 real 0m2.607s 00:04:51.063 user 0m4.690s 00:04:51.063 sys 0m0.445s 00:04:51.063 17:21:24 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.063 17:21:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:51.063 17:21:24 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:51.063 17:21:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.063 17:21:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.063 17:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:51.063 ************************************ 00:04:51.063 START TEST dpdk_mem_utility 00:04:51.063 ************************************ 00:04:51.063 17:21:24 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:51.063 * Looking for test storage... 
00:04:51.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:51.063 17:21:24 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:51.063 17:21:24 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:51.063 17:21:24 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:51.063 17:21:24 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:51.063 17:21:24 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.063 17:21:24 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.063 17:21:24 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.063 17:21:24 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.063 17:21:24 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.063 17:21:24 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.063 17:21:24 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.063 17:21:24 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.063 17:21:24 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.063 17:21:24 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.064 17:21:24 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.064 17:21:24 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:51.064 17:21:24 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:51.064 17:21:24 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.064 17:21:24 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.064 17:21:24 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:51.064 17:21:24 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:51.064 17:21:24 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.064 17:21:24 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:51.064 17:21:24 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.064 17:21:24 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:51.064 17:21:24 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:51.064 17:21:24 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.064 17:21:24 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:51.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:51.064 17:21:24 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.064 17:21:24 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.064 17:21:24 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.064 17:21:24 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:51.064 17:21:24 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.064 17:21:24 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:51.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.064 --rc genhtml_branch_coverage=1 00:04:51.064 --rc genhtml_function_coverage=1 00:04:51.064 --rc genhtml_legend=1 00:04:51.064 --rc geninfo_all_blocks=1 00:04:51.064 --rc geninfo_unexecuted_blocks=1 00:04:51.064 00:04:51.064 ' 00:04:51.064 17:21:24 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:51.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.064 --rc genhtml_branch_coverage=1 00:04:51.064 --rc genhtml_function_coverage=1 00:04:51.064 --rc genhtml_legend=1 00:04:51.064 --rc geninfo_all_blocks=1 00:04:51.064 --rc geninfo_unexecuted_blocks=1 00:04:51.064 00:04:51.064 ' 00:04:51.064 17:21:24 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:51.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.064 --rc genhtml_branch_coverage=1 00:04:51.064 --rc genhtml_function_coverage=1 00:04:51.064 --rc genhtml_legend=1 00:04:51.064 --rc geninfo_all_blocks=1 00:04:51.064 --rc geninfo_unexecuted_blocks=1 00:04:51.064 00:04:51.064 ' 00:04:51.064 17:21:24 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:51.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.064 --rc genhtml_branch_coverage=1 00:04:51.064 --rc genhtml_function_coverage=1 00:04:51.064 --rc genhtml_legend=1 00:04:51.064 --rc geninfo_all_blocks=1 00:04:51.064 --rc geninfo_unexecuted_blocks=1 00:04:51.064 00:04:51.064 ' 00:04:51.064 17:21:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:51.064 17:21:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58109 00:04:51.064 17:21:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58109 00:04:51.064 17:21:24 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58109 ']' 00:04:51.064 17:21:24 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.064 17:21:24 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.064 17:21:24 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.064 17:21:24 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.064 17:21:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:51.064 17:21:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.322 [2024-12-07 17:21:24.474001] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
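[Editor's sketch] Once the target is up, test_dpdk_mem_info.sh asks it to dump DPDK allocator state and then post-processes the dump, as traced below. The RPC writes the dump server-side and replies with its path; the helper script parses that file. The sequence, assuming the repo layout used in this run:

# Trigger the dump; the JSON reply names the file the target wrote.
scripts/rpc.py env_dpdk_get_mem_stats    # -> { "filename": "/tmp/spdk_mem_dump.txt" }

# Summarize heaps, mempools and memzones, then drill into heap id 0's elements.
scripts/dpdk_mem_info.py
scripts/dpdk_mem_info.py -m 0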
00:04:51.322 [2024-12-07 17:21:24.474161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58109 ] 00:04:51.322 [2024-12-07 17:21:24.633504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.581 [2024-12-07 17:21:24.737369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.148 17:21:25 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.148 17:21:25 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:52.148 17:21:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:52.148 17:21:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:52.148 17:21:25 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.148 17:21:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:52.148 { 00:04:52.148 "filename": "/tmp/spdk_mem_dump.txt" 00:04:52.148 } 00:04:52.148 17:21:25 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.148 17:21:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:52.148 DPDK memory size 824.000000 MiB in 1 heap(s) 00:04:52.148 1 heaps totaling size 824.000000 MiB 00:04:52.148 size: 824.000000 MiB heap id: 0 00:04:52.148 end heaps---------- 00:04:52.148 9 mempools totaling size 603.782043 MiB 00:04:52.148 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:52.148 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:52.148 size: 100.555481 MiB name: bdev_io_58109 00:04:52.148 size: 50.003479 MiB name: msgpool_58109 00:04:52.148 size: 36.509338 MiB name: fsdev_io_58109 00:04:52.148 size: 21.763794 MiB name: PDU_Pool 00:04:52.148 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:52.148 size: 4.133484 MiB name: evtpool_58109 00:04:52.148 size: 0.026123 MiB name: Session_Pool 00:04:52.148 end mempools------- 00:04:52.148 6 memzones totaling size 4.142822 MiB 00:04:52.148 size: 1.000366 MiB name: RG_ring_0_58109 00:04:52.148 size: 1.000366 MiB name: RG_ring_1_58109 00:04:52.148 size: 1.000366 MiB name: RG_ring_4_58109 00:04:52.148 size: 1.000366 MiB name: RG_ring_5_58109 00:04:52.148 size: 0.125366 MiB name: RG_ring_2_58109 00:04:52.148 size: 0.015991 MiB name: RG_ring_3_58109 00:04:52.148 end memzones------- 00:04:52.148 17:21:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:52.148 heap id: 0 total size: 824.000000 MiB number of busy elements: 326 number of free elements: 18 00:04:52.148 list of free elements. 
size: 16.778687 MiB 00:04:52.148 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:52.148 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:52.148 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:52.148 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:52.148 element at address: 0x200019900040 with size: 0.999939 MiB 00:04:52.148 element at address: 0x200019a00000 with size: 0.999084 MiB 00:04:52.148 element at address: 0x200032600000 with size: 0.994324 MiB 00:04:52.148 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:52.148 element at address: 0x200019200000 with size: 0.959656 MiB 00:04:52.148 element at address: 0x200019d00040 with size: 0.936401 MiB 00:04:52.148 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:52.148 element at address: 0x20001b400000 with size: 0.559021 MiB 00:04:52.148 element at address: 0x200000c00000 with size: 0.489197 MiB 00:04:52.148 element at address: 0x200019600000 with size: 0.487976 MiB 00:04:52.148 element at address: 0x200019e00000 with size: 0.485413 MiB 00:04:52.148 element at address: 0x200012c00000 with size: 0.433228 MiB 00:04:52.148 element at address: 0x200028800000 with size: 0.391663 MiB 00:04:52.148 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:52.148 list of standard malloc elements. size: 199.290405 MiB 00:04:52.148 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:52.148 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:52.148 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:52.148 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:52.148 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:04:52.148 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:52.148 element at address: 0x200019deff40 with size: 0.062683 MiB 00:04:52.148 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:52.148 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:52.148 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:04:52.148 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:52.148 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:04:52.148 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:52.148 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:52.148 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:52.148 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:52.148 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:52.148 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:52.148 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:52.148 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:52.148 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:52.148 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:52.148 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:52.148 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:52.148 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:04:52.148 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:52.148 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:52.148 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:52.148 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:52.148 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:52.148 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:52.148 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:52.148 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:52.148 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:52.148 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:52.149 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012c6f780 
with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200019affc40 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b48f1c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b48f2c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b48f3c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b48f4c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b48f5c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b48f6c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b490dc0 with size: 0.000244 MiB 
00:04:52.149 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:04:52.149 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:04:52.150 element at 
address: 0x20001b493fc0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:04:52.150 element at address: 0x200028864440 with size: 0.000244 MiB 00:04:52.150 element at address: 0x200028864540 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886b200 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886b480 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886b580 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886b680 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886b780 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886b880 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886b980 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886be80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886c080 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886c180 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886c280 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886c380 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886c480 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886c580 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886c680 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886c780 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886c880 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886c980 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886cd80 
with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886d080 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886d180 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886d280 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886d380 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886d480 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886d580 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886d680 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886d780 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886d880 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886d980 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886da80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886db80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886de80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886df80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886e080 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886e180 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886e280 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886e380 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886e480 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886e580 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886e680 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886e780 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886e880 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886e980 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886f080 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886f180 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886f280 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886f380 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886f480 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886f580 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886f680 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886f780 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886f880 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886f980 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:04:52.150 element at address: 0x20002886fe80 with size: 0.000244 MiB 
00:04:52.150 list of memzone associated elements. size: 607.930908 MiB 00:04:52.150 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:04:52.150 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:52.150 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:04:52.150 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:52.150 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:04:52.150 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58109_0 00:04:52.150 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:52.150 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58109_0 00:04:52.150 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:52.150 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58109_0 00:04:52.150 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:04:52.150 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:52.150 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:04:52.150 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:52.150 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:52.150 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58109_0 00:04:52.150 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:52.150 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58109 00:04:52.150 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:52.150 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58109 00:04:52.150 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:04:52.150 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:52.150 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:04:52.150 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:52.150 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:52.150 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:52.150 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:04:52.150 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:52.150 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:52.150 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58109 00:04:52.150 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:52.150 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58109 00:04:52.150 element at address: 0x200019affd40 with size: 1.000549 MiB 00:04:52.150 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58109 00:04:52.150 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:04:52.150 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58109 00:04:52.150 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:52.150 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58109 00:04:52.150 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:52.150 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58109 00:04:52.150 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:04:52.150 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:52.150 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:04:52.151 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:52.151 element at address: 0x200019e7c440 with size: 0.250549 MiB 
00:04:52.151 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:52.151 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:52.151 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58109 00:04:52.151 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:52.151 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58109 00:04:52.151 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:04:52.151 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:52.151 element at address: 0x200028864640 with size: 0.023804 MiB 00:04:52.151 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:52.151 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:52.151 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58109 00:04:52.151 element at address: 0x20002886a7c0 with size: 0.002502 MiB 00:04:52.151 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:52.151 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:52.151 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58109 00:04:52.151 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:52.151 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58109 00:04:52.151 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:52.151 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58109 00:04:52.151 element at address: 0x20002886b300 with size: 0.000366 MiB 00:04:52.151 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:52.151 17:21:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:52.151 17:21:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58109 00:04:52.151 17:21:25 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58109 ']' 00:04:52.151 17:21:25 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58109 00:04:52.151 17:21:25 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:52.151 17:21:25 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.151 17:21:25 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58109 00:04:52.151 17:21:25 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.151 17:21:25 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.151 killing process with pid 58109 00:04:52.151 17:21:25 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58109' 00:04:52.151 17:21:25 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58109 00:04:52.151 17:21:25 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58109 00:04:53.523 00:04:53.523 real 0m2.617s 00:04:53.523 user 0m2.650s 00:04:53.523 sys 0m0.450s 00:04:53.523 ************************************ 00:04:53.523 END TEST dpdk_mem_utility 00:04:53.523 ************************************ 00:04:53.523 17:21:26 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.523 17:21:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:53.523 17:21:26 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:53.523 17:21:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.523 17:21:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.523 
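For reference, the long per-element listing above is the expected output of SPDK's dpdk_mem_info.py helper, not a failure. A minimal sketch of reproducing the same dump by hand against a running target, using only the RPC and flags that appear in this trace (the repo path matches this run's layout; adjust it to your checkout):
SPDK=/home/vagrant/spdk_repo/spdk
# Ask the running app to write its DPDK memory stats to /tmp/spdk_mem_dump.txt
"$SPDK"/scripts/rpc.py env_dpdk_get_mem_stats
# Summarize heaps, mempools and memzones from that dump file
"$SPDK"/scripts/dpdk_mem_info.py
# List every busy/free element in heap 0, as printed above
"$SPDK"/scripts/dpdk_mem_info.py -m 0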
17:21:26 -- common/autotest_common.sh@10 -- # set +x 00:04:53.523 ************************************ 00:04:53.523 START TEST event 00:04:53.523 ************************************ 00:04:53.523 17:21:26 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:53.781 * Looking for test storage... 00:04:53.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:53.781 17:21:26 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:53.781 17:21:26 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:53.781 17:21:26 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:53.781 17:21:27 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:53.781 17:21:27 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.781 17:21:27 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.781 17:21:27 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.781 17:21:27 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.781 17:21:27 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.781 17:21:27 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.781 17:21:27 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.781 17:21:27 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.781 17:21:27 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.781 17:21:27 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.781 17:21:27 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.781 17:21:27 event -- scripts/common.sh@344 -- # case "$op" in 00:04:53.781 17:21:27 event -- scripts/common.sh@345 -- # : 1 00:04:53.781 17:21:27 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.781 17:21:27 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.781 17:21:27 event -- scripts/common.sh@365 -- # decimal 1 00:04:53.781 17:21:27 event -- scripts/common.sh@353 -- # local d=1 00:04:53.781 17:21:27 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.781 17:21:27 event -- scripts/common.sh@355 -- # echo 1 00:04:53.781 17:21:27 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.781 17:21:27 event -- scripts/common.sh@366 -- # decimal 2 00:04:53.781 17:21:27 event -- scripts/common.sh@353 -- # local d=2 00:04:53.781 17:21:27 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.781 17:21:27 event -- scripts/common.sh@355 -- # echo 2 00:04:53.781 17:21:27 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.781 17:21:27 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.781 17:21:27 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.781 17:21:27 event -- scripts/common.sh@368 -- # return 0 00:04:53.781 17:21:27 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.781 17:21:27 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:53.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.781 --rc genhtml_branch_coverage=1 00:04:53.781 --rc genhtml_function_coverage=1 00:04:53.781 --rc genhtml_legend=1 00:04:53.781 --rc geninfo_all_blocks=1 00:04:53.781 --rc geninfo_unexecuted_blocks=1 00:04:53.781 00:04:53.781 ' 00:04:53.781 17:21:27 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:53.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.782 --rc genhtml_branch_coverage=1 00:04:53.782 --rc genhtml_function_coverage=1 00:04:53.782 --rc genhtml_legend=1 00:04:53.782 --rc geninfo_all_blocks=1 00:04:53.782 --rc geninfo_unexecuted_blocks=1 00:04:53.782 00:04:53.782 ' 00:04:53.782 17:21:27 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:53.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.782 --rc genhtml_branch_coverage=1 00:04:53.782 --rc genhtml_function_coverage=1 00:04:53.782 --rc genhtml_legend=1 00:04:53.782 --rc geninfo_all_blocks=1 00:04:53.782 --rc geninfo_unexecuted_blocks=1 00:04:53.782 00:04:53.782 ' 00:04:53.782 17:21:27 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:53.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.782 --rc genhtml_branch_coverage=1 00:04:53.782 --rc genhtml_function_coverage=1 00:04:53.782 --rc genhtml_legend=1 00:04:53.782 --rc geninfo_all_blocks=1 00:04:53.782 --rc geninfo_unexecuted_blocks=1 00:04:53.782 00:04:53.782 ' 00:04:53.782 17:21:27 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:53.782 17:21:27 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:53.782 17:21:27 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:53.782 17:21:27 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:53.782 17:21:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.782 17:21:27 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.782 ************************************ 00:04:53.782 START TEST event_perf 00:04:53.782 ************************************ 00:04:53.782 17:21:27 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:53.782 Running I/O for 1 seconds...[2024-12-07 
17:21:27.078499] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:53.782 [2024-12-07 17:21:27.078952] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58202 ] 00:04:54.040 [2024-12-07 17:21:27.236873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:54.040 [2024-12-07 17:21:27.340325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.040 [2024-12-07 17:21:27.340663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:54.040 [2024-12-07 17:21:27.340640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.040 Running I/O for 1 seconds...[2024-12-07 17:21:27.340532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:55.414 00:04:55.414 lcore 0: 206019 00:04:55.414 lcore 1: 206018 00:04:55.414 lcore 2: 206018 00:04:55.414 lcore 3: 206020 00:04:55.414 done. 00:04:55.414 00:04:55.414 real 0m1.454s 00:04:55.414 user 0m4.259s 00:04:55.414 sys 0m0.075s 00:04:55.414 17:21:28 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.414 17:21:28 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:55.414 ************************************ 00:04:55.414 END TEST event_perf 00:04:55.414 ************************************ 00:04:55.414 17:21:28 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:55.414 17:21:28 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:55.414 17:21:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.414 17:21:28 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.414 ************************************ 00:04:55.414 START TEST event_reactor 00:04:55.414 ************************************ 00:04:55.414 17:21:28 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:55.414 [2024-12-07 17:21:28.564126] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:55.414 [2024-12-07 17:21:28.564236] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58241 ] 00:04:55.414 [2024-12-07 17:21:28.724285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.672 [2024-12-07 17:21:28.824177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.613 test_start 00:04:56.613 oneshot 00:04:56.613 tick 100 00:04:56.613 tick 100 00:04:56.613 tick 250 00:04:56.614 tick 100 00:04:56.614 tick 100 00:04:56.614 tick 100 00:04:56.614 tick 250 00:04:56.614 tick 500 00:04:56.614 tick 100 00:04:56.614 tick 100 00:04:56.614 tick 250 00:04:56.614 tick 100 00:04:56.614 tick 100 00:04:56.614 test_end 00:04:56.614 00:04:56.614 real 0m1.446s 00:04:56.614 user 0m1.266s 00:04:56.614 sys 0m0.071s 00:04:56.614 17:21:29 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.614 17:21:29 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:56.614 ************************************ 00:04:56.614 END TEST event_reactor 00:04:56.614 ************************************ 00:04:56.872 17:21:30 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:56.872 17:21:30 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:56.872 17:21:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.872 17:21:30 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.872 ************************************ 00:04:56.872 START TEST event_reactor_perf 00:04:56.872 ************************************ 00:04:56.872 17:21:30 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:56.872 [2024-12-07 17:21:30.049434] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:04:56.872 [2024-12-07 17:21:30.049537] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58278 ] 00:04:56.872 [2024-12-07 17:21:30.206017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.131 [2024-12-07 17:21:30.302672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.505 test_start 00:04:58.505 test_end 00:04:58.505 Performance: 315402 events per second 00:04:58.505 ************************************ 00:04:58.505 END TEST event_reactor_perf 00:04:58.505 ************************************ 00:04:58.505 00:04:58.505 real 0m1.432s 00:04:58.505 user 0m1.259s 00:04:58.505 sys 0m0.065s 00:04:58.505 17:21:31 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.505 17:21:31 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:58.505 17:21:31 event -- event/event.sh@49 -- # uname -s 00:04:58.505 17:21:31 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:58.505 17:21:31 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:58.505 17:21:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.505 17:21:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.505 17:21:31 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.505 ************************************ 00:04:58.505 START TEST event_scheduler 00:04:58.505 ************************************ 00:04:58.505 17:21:31 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:58.505 * Looking for test storage... 
00:04:58.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:58.505 17:21:31 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:58.505 17:21:31 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:58.505 17:21:31 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:58.505 17:21:31 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.505 17:21:31 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:58.505 17:21:31 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.505 17:21:31 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:58.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.505 --rc genhtml_branch_coverage=1 00:04:58.505 --rc genhtml_function_coverage=1 00:04:58.505 --rc genhtml_legend=1 00:04:58.505 --rc geninfo_all_blocks=1 00:04:58.505 --rc geninfo_unexecuted_blocks=1 00:04:58.505 00:04:58.505 ' 00:04:58.505 17:21:31 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:58.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.505 --rc genhtml_branch_coverage=1 00:04:58.505 --rc genhtml_function_coverage=1 00:04:58.505 --rc genhtml_legend=1 00:04:58.505 --rc geninfo_all_blocks=1 00:04:58.505 --rc geninfo_unexecuted_blocks=1 00:04:58.505 00:04:58.505 ' 00:04:58.505 17:21:31 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:58.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.505 --rc genhtml_branch_coverage=1 00:04:58.505 --rc genhtml_function_coverage=1 00:04:58.505 --rc genhtml_legend=1 00:04:58.505 --rc geninfo_all_blocks=1 00:04:58.505 --rc geninfo_unexecuted_blocks=1 00:04:58.505 00:04:58.505 ' 00:04:58.505 17:21:31 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:58.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.505 --rc genhtml_branch_coverage=1 00:04:58.505 --rc genhtml_function_coverage=1 00:04:58.505 --rc genhtml_legend=1 00:04:58.505 --rc geninfo_all_blocks=1 00:04:58.505 --rc geninfo_unexecuted_blocks=1 00:04:58.505 00:04:58.505 ' 00:04:58.505 17:21:31 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:58.505 17:21:31 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58348 00:04:58.505 17:21:31 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.505 17:21:31 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58348 00:04:58.505 17:21:31 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58348 ']' 00:04:58.505 17:21:31 event.event_scheduler -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:58.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.505 17:21:31 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.506 17:21:31 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.506 17:21:31 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.506 17:21:31 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:58.506 17:21:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:58.506 [2024-12-07 17:21:31.703913] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:04:58.506 [2024-12-07 17:21:31.704042] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58348 ] 00:04:58.506 [2024-12-07 17:21:31.863161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:58.763 [2024-12-07 17:21:31.966449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.763 [2024-12-07 17:21:31.966654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.763 [2024-12-07 17:21:31.966878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:58.763 [2024-12-07 17:21:31.966879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.331 17:21:32 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.331 17:21:32 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:59.331 17:21:32 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:59.331 17:21:32 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.331 17:21:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.331 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:59.331 POWER: Cannot set governor of lcore 0 to userspace 00:04:59.331 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:59.331 POWER: Cannot set governor of lcore 0 to performance 00:04:59.331 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:59.331 POWER: Cannot set governor of lcore 0 to userspace 00:04:59.331 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:59.331 POWER: Cannot set governor of lcore 0 to userspace 00:04:59.331 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:59.331 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:59.331 POWER: Unable to set Power Management Environment for lcore 0 00:04:59.331 [2024-12-07 17:21:32.548312] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:59.331 [2024-12-07 17:21:32.548331] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:59.331 [2024-12-07 17:21:32.548340] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:59.331 [2024-12-07 17:21:32.548356] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:59.331 [2024-12-07 17:21:32.548364] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:59.331 [2024-12-07 17:21:32.548372] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:59.331 17:21:32 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.331 17:21:32 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:59.331 17:21:32 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.331 17:21:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.607 [2024-12-07 17:21:32.774378] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:59.607 17:21:32 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.608 17:21:32 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:59.608 17:21:32 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.608 17:21:32 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.608 17:21:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.608 ************************************ 00:04:59.608 START TEST scheduler_create_thread 00:04:59.608 ************************************ 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.608 2 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.608 3 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.608 4 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.608 5 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.608 6 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.608 7 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.608 8 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.608 9 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.608 10 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.608 17:21:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.219 ************************************ 00:05:00.219 END TEST scheduler_create_thread 00:05:00.219 ************************************ 00:05:00.219 17:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.219 00:05:00.219 real 0m0.595s 00:05:00.219 user 0m0.010s 00:05:00.219 sys 0m0.007s 00:05:00.219 17:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.219 17:21:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.219 17:21:33 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:00.219 17:21:33 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58348 00:05:00.219 17:21:33 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58348 ']' 00:05:00.219 17:21:33 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58348 00:05:00.219 17:21:33 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:00.219 17:21:33 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.219 17:21:33 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58348 00:05:00.219 17:21:33 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:00.219 killing process with pid 58348 00:05:00.219 17:21:33 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:00.219 17:21:33 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58348' 00:05:00.219 17:21:33 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58348 00:05:00.219 17:21:33 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 58348 00:05:00.785 [2024-12-07 17:21:33.859334] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:01.352 ************************************ 00:05:01.352 END TEST event_scheduler 00:05:01.352 ************************************ 00:05:01.352 00:05:01.352 real 0m2.942s 00:05:01.352 user 0m5.579s 00:05:01.352 sys 0m0.342s 00:05:01.352 17:21:34 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.352 17:21:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:01.352 17:21:34 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:01.352 17:21:34 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:01.352 17:21:34 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.352 17:21:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.352 17:21:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.352 ************************************ 00:05:01.352 START TEST app_repeat 00:05:01.353 ************************************ 00:05:01.353 17:21:34 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:01.353 17:21:34 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.353 17:21:34 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.353 17:21:34 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:01.353 17:21:34 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.353 17:21:34 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:01.353 17:21:34 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:01.353 17:21:34 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:01.353 17:21:34 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58427 00:05:01.353 17:21:34 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.353 Process app_repeat pid: 58427 00:05:01.353 17:21:34 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58427' 00:05:01.353 spdk_app_start Round 0 00:05:01.353 17:21:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:01.353 17:21:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:01.353 17:21:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58427 /var/tmp/spdk-nbd.sock 00:05:01.353 17:21:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58427 ']' 00:05:01.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:01.353 17:21:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:01.353 17:21:34 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:01.353 17:21:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.353 17:21:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:01.353 17:21:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.353 17:21:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:01.353 [2024-12-07 17:21:34.525230] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
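The scheduler teardown traced above follows the kill-then-wait pattern from common/autotest_common.sh: probe the pid, signal it, then reap it so the exit status is collected before the next test starts. A minimal sketch of that pattern (function body and error handling are illustrative, not the exact helper):

killprocess() {
    local pid=$1
    # kill -0 only probes: it fails if the pid is gone or not signalable
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"
        # reap the child so its exit status cannot leak into later tests;
        # SIGTERM makes wait return nonzero, which is expected here
        wait "$pid" || true
    fi
}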
00:05:01.353 [2024-12-07 17:21:34.525338] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58427 ] 00:05:01.353 [2024-12-07 17:21:34.685440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.612 [2024-12-07 17:21:34.783005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.612 [2024-12-07 17:21:34.783042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.178 17:21:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.178 17:21:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:02.178 17:21:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.435 Malloc0 00:05:02.435 17:21:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.693 Malloc1 00:05:02.693 17:21:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:02.693 17:21:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.693 17:21:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.693 17:21:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:02.693 17:21:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.693 17:21:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:02.693 17:21:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:02.693 17:21:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.693 17:21:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.693 17:21:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:02.693 17:21:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.693 17:21:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:02.693 17:21:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:02.693 17:21:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:02.693 17:21:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.693 17:21:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:02.951 /dev/nbd0 00:05:02.951 17:21:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:02.951 17:21:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:02.951 17:21:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:02.951 17:21:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:02.951 17:21:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:02.951 17:21:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:02.951 17:21:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:02.951 17:21:36 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:02.951 17:21:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:02.951 17:21:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:02.951 17:21:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.951 1+0 records in 00:05:02.951 1+0 records out 00:05:02.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191536 s, 21.4 MB/s 00:05:02.951 17:21:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.951 17:21:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:02.951 17:21:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.951 17:21:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:02.951 17:21:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:02.951 17:21:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.951 17:21:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.951 17:21:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:02.951 /dev/nbd1 00:05:02.951 17:21:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:02.951 17:21:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:02.951 17:21:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:02.951 17:21:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:02.951 17:21:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:02.951 17:21:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:02.951 17:21:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:03.210 17:21:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:03.210 17:21:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:03.210 17:21:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:03.210 17:21:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.210 1+0 records in 00:05:03.210 1+0 records out 00:05:03.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000147571 s, 27.8 MB/s 00:05:03.210 17:21:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.210 17:21:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:03.210 17:21:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.210 17:21:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:03.210 17:21:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:03.210 17:21:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.210 17:21:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.210 17:21:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.210 17:21:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
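The waitfornbd trace above polls /proc/partitions until the kernel exposes the device, then issues a single O_DIRECT read through it. A hedged sketch of the same loop (the retry budget and scratch-file path are assumptions):

waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # the device is usable once it shows up as a partition entry
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # one direct-I/O block read confirms the nbd server answers requests
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
}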
00:05:03.210 17:21:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.210 17:21:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:03.210 { 00:05:03.210 "nbd_device": "/dev/nbd0", 00:05:03.210 "bdev_name": "Malloc0" 00:05:03.210 }, 00:05:03.210 { 00:05:03.210 "nbd_device": "/dev/nbd1", 00:05:03.210 "bdev_name": "Malloc1" 00:05:03.210 } 00:05:03.210 ]' 00:05:03.210 17:21:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.210 17:21:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:03.210 { 00:05:03.210 "nbd_device": "/dev/nbd0", 00:05:03.210 "bdev_name": "Malloc0" 00:05:03.210 }, 00:05:03.210 { 00:05:03.210 "nbd_device": "/dev/nbd1", 00:05:03.210 "bdev_name": "Malloc1" 00:05:03.210 } 00:05:03.210 ]' 00:05:03.210 17:21:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:03.210 /dev/nbd1' 00:05:03.210 17:21:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:03.210 /dev/nbd1' 00:05:03.210 17:21:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.210 17:21:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:03.210 17:21:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:03.210 17:21:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:03.210 17:21:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:03.210 17:21:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:03.210 17:21:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.210 17:21:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.210 17:21:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:03.210 17:21:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:03.210 17:21:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:03.210 17:21:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:03.469 256+0 records in 00:05:03.469 256+0 records out 00:05:03.469 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104182 s, 101 MB/s 00:05:03.469 17:21:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.469 17:21:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:03.469 256+0 records in 00:05:03.469 256+0 records out 00:05:03.469 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210187 s, 49.9 MB/s 00:05:03.469 17:21:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.469 17:21:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:03.469 256+0 records in 00:05:03.469 256+0 records out 00:05:03.469 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220009 s, 47.7 MB/s 00:05:03.469 17:21:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:03.469 17:21:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.469 17:21:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.469 17:21:36 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:03.469 17:21:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:03.469 17:21:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:03.469 17:21:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:03.469 17:21:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.469 17:21:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:03.469 17:21:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.469 17:21:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:03.469 17:21:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:03.469 17:21:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:03.469 17:21:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.469 17:21:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.469 17:21:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:03.469 17:21:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:03.469 17:21:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.469 17:21:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:03.728 17:21:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:03.728 17:21:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:03.728 17:21:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:03.728 17:21:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.728 17:21:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.728 17:21:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:03.728 17:21:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.728 17:21:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.728 17:21:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.728 17:21:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:03.728 17:21:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:03.728 17:21:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:03.728 17:21:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:03.728 17:21:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.728 17:21:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.728 17:21:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:03.728 17:21:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.728 17:21:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.728 17:21:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.728 17:21:37 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.728 17:21:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.987 17:21:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:03.987 17:21:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.987 17:21:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:03.987 17:21:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:03.987 17:21:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:03.987 17:21:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.987 17:21:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:03.987 17:21:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:03.987 17:21:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:03.987 17:21:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:03.987 17:21:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:03.987 17:21:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:03.987 17:21:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:04.247 17:21:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:04.814 [2024-12-07 17:21:38.181604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.072 [2024-12-07 17:21:38.252201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.072 [2024-12-07 17:21:38.252404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.072 [2024-12-07 17:21:38.348915] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:05.072 [2024-12-07 17:21:38.348973] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:07.606 spdk_app_start Round 1 00:05:07.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:07.606 17:21:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:07.606 17:21:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:07.606 17:21:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58427 /var/tmp/spdk-nbd.sock 00:05:07.606 17:21:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58427 ']' 00:05:07.606 17:21:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:07.606 17:21:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.606 17:21:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
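Round 0 above exercises the write/verify data path: fill a scratch file with random bytes, stream it onto each exported nbd device with O_DIRECT, then byte-compare the device contents against the file. A condensed sketch of that flow (paths abbreviated from the log):

tmp_file=/tmp/nbdrandtest
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256   # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    # O_DIRECT keeps the page cache from masking a broken device
    dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
done
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp_file" "$nbd"   # nonzero exit fails the test
done
rm "$tmp_file"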
00:05:07.606 17:21:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.606 17:21:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.606 17:21:40 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.606 17:21:40 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:07.606 17:21:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.864 Malloc0 00:05:07.864 17:21:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.122 Malloc1 00:05:08.122 17:21:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.122 17:21:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.122 17:21:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.122 17:21:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:08.122 17:21:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.122 17:21:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:08.122 17:21:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.122 17:21:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.122 17:21:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.122 17:21:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:08.122 17:21:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.122 17:21:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:08.122 17:21:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:08.122 17:21:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:08.122 17:21:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.122 17:21:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:08.122 /dev/nbd0 00:05:08.379 17:21:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:08.379 17:21:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.379 1+0 records in 00:05:08.379 1+0 records out 
00:05:08.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376338 s, 10.9 MB/s 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:08.379 17:21:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.379 17:21:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.379 17:21:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:08.379 /dev/nbd1 00:05:08.379 17:21:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:08.379 17:21:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.379 1+0 records in 00:05:08.379 1+0 records out 00:05:08.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196259 s, 20.9 MB/s 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:08.379 17:21:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:08.379 17:21:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.379 17:21:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.637 17:21:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:08.637 17:21:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.637 17:21:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.637 17:21:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:08.637 { 00:05:08.637 "nbd_device": "/dev/nbd0", 00:05:08.637 "bdev_name": "Malloc0" 00:05:08.637 }, 00:05:08.637 { 00:05:08.637 "nbd_device": "/dev/nbd1", 00:05:08.637 "bdev_name": "Malloc1" 00:05:08.637 } 
00:05:08.637 ]' 00:05:08.637 17:21:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.637 17:21:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:08.637 { 00:05:08.637 "nbd_device": "/dev/nbd0", 00:05:08.637 "bdev_name": "Malloc0" 00:05:08.637 }, 00:05:08.637 { 00:05:08.637 "nbd_device": "/dev/nbd1", 00:05:08.637 "bdev_name": "Malloc1" 00:05:08.637 } 00:05:08.637 ]' 00:05:08.637 17:21:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:08.637 /dev/nbd1' 00:05:08.637 17:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.637 17:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:08.637 /dev/nbd1' 00:05:08.637 17:21:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:08.637 17:21:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:08.637 17:21:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:08.637 17:21:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:08.637 17:21:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:08.637 17:21:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.637 17:21:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:08.637 17:21:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:08.637 17:21:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:08.637 17:21:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:08.637 17:21:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:08.637 256+0 records in 00:05:08.637 256+0 records out 00:05:08.637 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00597367 s, 176 MB/s 00:05:08.637 17:21:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:08.637 17:21:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:08.895 256+0 records in 00:05:08.895 256+0 records out 00:05:08.895 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166098 s, 63.1 MB/s 00:05:08.895 17:21:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:08.895 17:21:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:08.895 256+0 records in 00:05:08.895 256+0 records out 00:05:08.895 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0188279 s, 55.7 MB/s 00:05:08.895 17:21:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:08.895 17:21:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.895 17:21:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:08.895 17:21:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:08.895 17:21:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:08.895 17:21:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:08.895 17:21:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:08.895 17:21:42 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:08.895 17:21:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:08.895 17:21:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:08.895 17:21:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:08.895 17:21:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:08.895 17:21:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:08.895 17:21:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.895 17:21:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.895 17:21:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:08.895 17:21:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:08.895 17:21:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.895 17:21:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:09.152 17:21:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:09.152 17:21:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:09.152 17:21:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:09.152 17:21:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.152 17:21:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.152 17:21:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:09.152 17:21:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.152 17:21:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.152 17:21:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.152 17:21:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:09.409 17:21:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:09.409 17:21:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:09.410 17:21:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:09.410 17:21:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.410 17:21:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.410 17:21:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:09.410 17:21:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.410 17:21:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.410 17:21:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.410 17:21:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.410 17:21:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.410 17:21:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:09.410 17:21:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:09.410 17:21:42 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:09.666 17:21:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:09.666 17:21:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:09.666 17:21:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.666 17:21:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:09.666 17:21:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:09.666 17:21:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:09.666 17:21:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:09.666 17:21:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:09.666 17:21:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:09.666 17:21:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:09.923 17:21:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:10.487 [2024-12-07 17:21:43.653307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.487 [2024-12-07 17:21:43.723487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.487 [2024-12-07 17:21:43.723514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.487 [2024-12-07 17:21:43.824697] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:10.487 [2024-12-07 17:21:43.824743] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:13.012 spdk_app_start Round 2 00:05:13.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:13.012 17:21:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:13.012 17:21:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:13.012 17:21:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58427 /var/tmp/spdk-nbd.sock 00:05:13.012 17:21:46 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58427 ']' 00:05:13.012 17:21:46 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.012 17:21:46 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.012 17:21:46 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
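The count check that closes each round (visible in the trace above) lists the attached devices over the RPC socket, extracts the device paths with jq, and requires the count to match the round's expectation (two while running, zero after teardown). A hedged sketch of that check (socket path and expected count taken from the log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
disks_json=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks)
names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
# grep -c exits 1 when it matches nothing, so keep the pipeline alive with || true
count=$(echo "$names" | grep -c /dev/nbd || true)
[[ "$count" -eq 0 ]] || exit 1   # after nbd_stop_disk, nothing may remain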
00:05:13.012 17:21:46 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.012 17:21:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:13.012 17:21:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.012 17:21:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:13.012 17:21:46 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.270 Malloc0 00:05:13.270 17:21:46 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.528 Malloc1 00:05:13.528 17:21:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.528 17:21:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.528 17:21:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.528 17:21:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:13.528 17:21:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.528 17:21:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:13.528 17:21:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.528 17:21:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.528 17:21:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.528 17:21:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:13.528 17:21:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.528 17:21:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:13.528 17:21:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:13.528 17:21:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:13.528 17:21:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.528 17:21:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:13.786 /dev/nbd0 00:05:13.786 17:21:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:13.786 17:21:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:13.786 17:21:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:13.786 17:21:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:13.786 17:21:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:13.786 17:21:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:13.786 17:21:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:13.786 17:21:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:13.786 17:21:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:13.786 17:21:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:13.786 17:21:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.786 1+0 records in 00:05:13.786 1+0 records out 
00:05:13.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212081 s, 19.3 MB/s 00:05:13.786 17:21:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.786 17:21:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:13.786 17:21:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.786 17:21:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:13.786 17:21:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:13.786 17:21:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.786 17:21:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.786 17:21:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:14.044 /dev/nbd1 00:05:14.044 17:21:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:14.044 17:21:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:14.044 17:21:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:14.044 17:21:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:14.044 17:21:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:14.044 17:21:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:14.044 17:21:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:14.044 17:21:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:14.044 17:21:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:14.044 17:21:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:14.044 17:21:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.044 1+0 records in 00:05:14.044 1+0 records out 00:05:14.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241454 s, 17.0 MB/s 00:05:14.044 17:21:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.044 17:21:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:14.044 17:21:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.044 17:21:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:14.044 17:21:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:14.044 17:21:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.044 17:21:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.044 17:21:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.044 17:21:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.044 17:21:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.044 17:21:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:14.044 { 00:05:14.044 "nbd_device": "/dev/nbd0", 00:05:14.044 "bdev_name": "Malloc0" 00:05:14.044 }, 00:05:14.044 { 00:05:14.044 "nbd_device": "/dev/nbd1", 00:05:14.044 "bdev_name": "Malloc1" 00:05:14.044 } 
00:05:14.044 ]' 00:05:14.044 17:21:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:14.044 { 00:05:14.044 "nbd_device": "/dev/nbd0", 00:05:14.044 "bdev_name": "Malloc0" 00:05:14.044 }, 00:05:14.044 { 00:05:14.044 "nbd_device": "/dev/nbd1", 00:05:14.044 "bdev_name": "Malloc1" 00:05:14.044 } 00:05:14.044 ]' 00:05:14.044 17:21:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:14.303 /dev/nbd1' 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:14.303 /dev/nbd1' 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:14.303 256+0 records in 00:05:14.303 256+0 records out 00:05:14.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00895375 s, 117 MB/s 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:14.303 256+0 records in 00:05:14.303 256+0 records out 00:05:14.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147203 s, 71.2 MB/s 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:14.303 256+0 records in 00:05:14.303 256+0 records out 00:05:14.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173357 s, 60.5 MB/s 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:14.303 17:21:47 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.303 17:21:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:14.562 17:21:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:14.562 17:21:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:14.562 17:21:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:14.562 17:21:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.562 17:21:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.562 17:21:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:14.562 17:21:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:14.562 17:21:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.562 17:21:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.562 17:21:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:14.562 17:21:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:14.562 17:21:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:14.562 17:21:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:14.562 17:21:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.562 17:21:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.562 17:21:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:14.562 17:21:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:14.562 17:21:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.562 17:21:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.562 17:21:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.821 17:21:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.821 17:21:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:14.821 17:21:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.821 17:21:48 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:14.821 17:21:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:14.821 17:21:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:14.821 17:21:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.821 17:21:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:14.821 17:21:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:14.821 17:21:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:14.821 17:21:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:14.821 17:21:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:14.821 17:21:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:14.821 17:21:48 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:15.388 17:21:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:15.646 [2024-12-07 17:21:49.018332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.905 [2024-12-07 17:21:49.088099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.905 [2024-12-07 17:21:49.088182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.905 [2024-12-07 17:21:49.186443] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:15.905 [2024-12-07 17:21:49.186486] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:18.434 17:21:51 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58427 /var/tmp/spdk-nbd.sock 00:05:18.434 17:21:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58427 ']' 00:05:18.434 17:21:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:18.434 17:21:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:18.434 17:21:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
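Before the final kill, the harness resolves the pid back to a process name (the ps --no-headers -o comm= calls in the trace) so it never signals an unrelated process that reused the pid. A minimal sketch of that guard (the reactor-name pattern is an assumption based on the reactor_0 output above, not the exact helper logic):

pid=58427
process_name=$(ps --no-headers -o comm= "$pid")
# only signal the process if the pid still belongs to the SPDK reactor
if [[ "$process_name" == reactor_* ]]; then
    kill "$pid" && wait "$pid" || true
fi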
00:05:18.434 17:21:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.434 17:21:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:18.434 17:21:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.434 17:21:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:18.434 17:21:51 event.app_repeat -- event/event.sh@39 -- # killprocess 58427 00:05:18.434 17:21:51 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58427 ']' 00:05:18.434 17:21:51 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58427 00:05:18.434 17:21:51 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:18.434 17:21:51 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.434 17:21:51 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58427 00:05:18.434 killing process with pid 58427 00:05:18.434 17:21:51 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.434 17:21:51 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.434 17:21:51 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58427' 00:05:18.434 17:21:51 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58427 00:05:18.434 17:21:51 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58427 00:05:19.000 spdk_app_start is called in Round 0. 00:05:19.000 Shutdown signal received, stop current app iteration 00:05:19.000 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 reinitialization... 00:05:19.000 spdk_app_start is called in Round 1. 00:05:19.000 Shutdown signal received, stop current app iteration 00:05:19.000 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 reinitialization... 00:05:19.000 spdk_app_start is called in Round 2. 00:05:19.000 Shutdown signal received, stop current app iteration 00:05:19.000 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 reinitialization... 00:05:19.000 spdk_app_start is called in Round 3. 00:05:19.000 Shutdown signal received, stop current app iteration 00:05:19.000 17:21:52 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:19.000 17:21:52 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:19.000 00:05:19.000 real 0m17.739s 00:05:19.000 user 0m38.964s 00:05:19.000 sys 0m2.059s 00:05:19.000 17:21:52 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.000 ************************************ 00:05:19.000 END TEST app_repeat 00:05:19.000 ************************************ 00:05:19.000 17:21:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.000 17:21:52 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:19.001 17:21:52 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:19.001 17:21:52 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.001 17:21:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.001 17:21:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.001 ************************************ 00:05:19.001 START TEST cpu_locks 00:05:19.001 ************************************ 00:05:19.001 17:21:52 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:19.001 * Looking for test storage... 
00:05:19.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:19.001 17:21:52 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:19.001 17:21:52 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version
00:05:19.001 17:21:52 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:19.259 17:21:52 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:19.259 17:21:52 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:05:19.259 17:21:52 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:19.259 17:21:52 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:19.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:19.259 --rc genhtml_branch_coverage=1
00:05:19.259 --rc genhtml_function_coverage=1
00:05:19.259 --rc genhtml_legend=1
00:05:19.259 --rc geninfo_all_blocks=1
00:05:19.259 --rc geninfo_unexecuted_blocks=1
00:05:19.259
00:05:19.259 '
00:05:19.259 17:21:52 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:19.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:19.259 --rc genhtml_branch_coverage=1
00:05:19.259 --rc genhtml_function_coverage=1
00:05:19.259 --rc genhtml_legend=1
00:05:19.259 --rc geninfo_all_blocks=1
00:05:19.259 --rc geninfo_unexecuted_blocks=1
00:05:19.259
00:05:19.259 '
00:05:19.259 17:21:52 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:19.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:19.259 --rc genhtml_branch_coverage=1
00:05:19.259 --rc genhtml_function_coverage=1
00:05:19.259 --rc genhtml_legend=1
00:05:19.259 --rc geninfo_all_blocks=1
00:05:19.259 --rc geninfo_unexecuted_blocks=1
00:05:19.259
00:05:19.259 '
00:05:19.259 17:21:52 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:19.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:19.259 --rc genhtml_branch_coverage=1
00:05:19.259 --rc genhtml_function_coverage=1
00:05:19.259 --rc genhtml_legend=1
00:05:19.259 --rc geninfo_all_blocks=1
00:05:19.259 --rc geninfo_unexecuted_blocks=1
00:05:19.259
00:05:19.259 '
00:05:19.259 17:21:52 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:19.259 17:21:52 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:19.259 17:21:52 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:19.259 17:21:52 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:19.259 17:21:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:19.259 17:21:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:19.259 17:21:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:19.259 ************************************
00:05:19.259 START TEST default_locks
00:05:19.259 ************************************
00:05:19.259 17:21:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:05:19.259 17:21:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58863
00:05:19.259 17:21:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58863
00:05:19.259 17:21:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58863 ']'
00:05:19.259 17:21:52 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:19.259 17:21:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:19.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
17:21:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:19.259 17:21:52 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:19.259 17:21:52 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:19.259 17:21:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:19.517 [2024-12-07 17:21:52.491076] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
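The lt/cmp_versions expansion traced in the lcov check above compares version strings component by component. A rough re-creation of that logic from scripts/common.sh (condensed; the original also distinguishes the >, =, and mixed operators):

  lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
    done
    return 1                                            # equal is not less-than
  }
  lt 1.15 2 && echo "lcov predates 2.x"                 # the 'lt 1.15 2' call traced above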
00:05:19.259 [2024-12-07 17:21:52.491194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58863 ]
00:05:19.517 [2024-12-07 17:21:52.651910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:19.517 [2024-12-07 17:21:52.751786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:20.114 17:21:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:20.114 17:21:53 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:05:20.114 17:21:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58863
00:05:20.114 17:21:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58863
00:05:20.114 17:21:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:20.372 17:21:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58863
00:05:20.372 17:21:53 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58863 ']'
00:05:20.372 17:21:53 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58863
00:05:20.372 17:21:53 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:05:20.372 17:21:53 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:20.372 17:21:53 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58863
killing process with pid 58863
17:21:53 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:20.372 17:21:53 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:20.372 17:21:53 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58863'
00:05:20.372 17:21:53 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58863
00:05:20.372 17:21:53 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58863
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58863
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58863
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:21.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
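The locks_exist check traced above is a one-liner at heart: ask lslocks which file locks the target pid holds and look for the spdk_cpu_lock prefix. A minimal sketch:

  locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock   # true while the per-core lock is held
  }
  locks_exist 58863 && echo 'pid 58863 holds its core lock'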
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58863
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58863 ']'
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:21.749 ERROR: process (pid: 58863) is no longer running
00:05:21.749 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58863) - No such process
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:21.749
00:05:21.749 real 0m2.441s
00:05:21.749 user 0m2.435s
00:05:21.749 sys 0m0.437s
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:21.749 17:21:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:21.749 ************************************
00:05:21.749 END TEST default_locks
00:05:21.749 ************************************
00:05:21.749 17:21:54 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:21.749 17:21:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:21.749 17:21:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:21.749 17:21:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:21.750 ************************************
00:05:21.750 START TEST default_locks_via_rpc
00:05:21.750 ************************************
00:05:21.750 17:21:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:05:21.750 17:21:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58916
00:05:21.750 17:21:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58916
00:05:21.750 17:21:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58916 ']'
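The NOT wrapper whose expansion fills the trace above inverts an expected failure into a test pass. A simplified sketch (the real helper also inspects the exit-code range, as the es= bookkeeping shows):

  NOT() {
    if "$@"; then
      return 1          # command unexpectedly succeeded
    fi
    return 0            # command failed, which is what the test wanted
  }
  NOT waitforlisten 58863   # the pid was killed above, so this must fail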
00:05:21.750 17:21:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:21.750 17:21:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:21.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
17:21:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:21.750 17:21:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:21.750 17:21:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:21.750 17:21:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:21.750 [2024-12-07 17:21:54.972973] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:05:21.750 [2024-12-07 17:21:54.973099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58916 ]
00:05:21.750 [2024-12-07 17:21:55.127873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:22.006 [2024-12-07 17:21:55.205937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58916
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58916
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58916
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58916 ']'
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58916
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58916
00:05:22.571 killing process with pid 58916
17:21:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58916'
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58916
00:05:22.571 17:21:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58916
00:05:23.945
00:05:23.945 real 0m2.179s
00:05:23.945 user 0m2.100s
00:05:23.945 sys 0m0.417s
00:05:23.945 17:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:23.945 17:21:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:23.945 ************************************
00:05:23.945 END TEST default_locks_via_rpc
00:05:23.945 ************************************
00:05:23.945 17:21:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:23.945 17:21:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:23.945 17:21:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:23.945 17:21:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:23.945 ************************************
00:05:23.945 START TEST non_locking_app_on_locked_coremask
00:05:23.945 ************************************
00:05:23.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
17:21:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:05:23.945 17:21:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58968
00:05:23.945 17:21:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58968 /var/tmp/spdk.sock
00:05:23.945 17:21:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58968 ']'
00:05:23.945 17:21:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:23.945 17:21:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:23.945 17:21:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
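default_locks_via_rpc, which just finished above, exercises the same lock files but toggles them at runtime over the RPC socket; both RPC names appear verbatim in the trace. A condensed sketch with the pid and paths from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc framework_disable_cpumask_locks                         # drop /var/tmp/spdk_cpu_lock_*
  lslocks -p 58916 | grep -q spdk_cpu_lock || echo 'no locks held'
  $rpc framework_enable_cpumask_locks                          # claim them again
  lslocks -p 58916 | grep -q spdk_cpu_lock && echo 'lock re-acquired'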
00:05:23.945 17:21:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:23.945 17:21:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:23.945 17:21:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:23.945 [2024-12-07 17:21:57.194303] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:05:23.945 [2024-12-07 17:21:57.194403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58968 ]
00:05:24.203 [2024-12-07 17:21:57.347768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:24.203 [2024-12-07 17:21:57.427763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:24.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
17:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:24.769 17:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:24.769 17:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:24.769 17:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58984
00:05:24.769 17:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58984 /var/tmp/spdk2.sock
00:05:24.769 17:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58984 ']'
00:05:24.769 17:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:24.769 17:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:24.769 17:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:24.769 17:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:24.769 17:21:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:25.026 [2024-12-07 17:21:58.083992] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:05:25.026 [2024-12-07 17:21:58.084285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58984 ]
00:05:25.026 [2024-12-07 17:21:58.253345] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
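In outline, this test runs two targets on the same core with only the first taking the lock; the second opts out via --disable-cpumask-locks and listens on a second socket. A sketch of the shape of the setup (pids as in this run; the harness uses waitforlisten rather than bare backgrounding):

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  $spdk_tgt -m 0x1 &                                                 # pid 58968: claims core 0's lock
  $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # pid 58984: takes no lock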
00:05:25.026 [2024-12-07 17:21:58.253386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:25.026 [2024-12-07 17:21:58.391203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:25.961 17:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:25.961 17:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:25.961 17:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58968
00:05:25.961 17:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58968
00:05:25.961 17:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:26.548 17:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58968
00:05:26.548 17:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58968 ']'
00:05:26.548 17:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58968
00:05:26.548 17:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:26.548 17:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:26.548 17:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58968
killing process with pid 58968
17:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:26.548 17:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:26.548 17:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58968'
00:05:26.548 17:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58968
00:05:26.548 17:21:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58968
00:05:29.072 17:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58984
00:05:29.072 17:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58984 ']'
00:05:29.072 17:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58984
00:05:29.072 17:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:29.072 17:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:29.072 17:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58984
killing process with pid 58984
17:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:29.072 17:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:29.072 17:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58984'
17:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58984
00:05:29.072 17:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58984
00:05:30.005
00:05:30.005 real 0m6.126s
00:05:30.005 user 0m6.389s
00:05:30.005 sys 0m0.822s
00:05:30.005 17:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:30.005 ************************************
00:05:30.005 END TEST non_locking_app_on_locked_coremask
00:05:30.005 ************************************
00:05:30.005 17:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:30.005 17:22:03 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:30.005 17:22:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:30.005 17:22:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:30.005 17:22:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:30.005 ************************************
00:05:30.005 START TEST locking_app_on_unlocked_coremask
00:05:30.005 ************************************
00:05:30.005 17:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:05:30.005 17:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59075
00:05:30.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
17:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59075 /var/tmp/spdk.sock
00:05:30.005 17:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59075 ']'
00:05:30.005 17:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:30.005 17:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:30.005 17:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:30.005 17:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:30.005 17:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:30.005 17:22:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:30.263 [2024-12-07 17:22:03.360629] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:05:30.263 [2024-12-07 17:22:03.360746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59075 ]
00:05:30.263 [2024-12-07 17:22:03.515785] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:30.263 [2024-12-07 17:22:03.515837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:30.263 [2024-12-07 17:22:03.595888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:30.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
17:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:30.830 17:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:30.830 17:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:30.830 17:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59091
00:05:30.830 17:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59091 /var/tmp/spdk2.sock
00:05:30.830 17:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59091 ']'
00:05:30.830 17:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:30.830 17:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:30.830 17:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:30.830 17:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:30.830 17:22:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:31.088 [2024-12-07 17:22:04.250425] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
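locking_app_on_unlocked_coremask, whose second target is starting here, is the mirror image of the previous test: the first instance opted out of locking, so a lock-enabled second instance on the same core can claim the lock file. A sketch of the shape of the test:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  $spdk_tgt -m 0x1 --disable-cpumask-locks &       # pid 59075: holds no lock
  $spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &        # pid 59091: claims core 0's lock
  lslocks -p 59091 | grep -q spdk_cpu_lock         # the lock belongs to the second pid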
00:05:31.088 [2024-12-07 17:22:04.250727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59091 ]
00:05:31.088 [2024-12-07 17:22:04.415516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:31.347 [2024-12-07 17:22:04.574665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:32.291 17:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:32.291 17:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:32.291 17:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59091
00:05:32.291 17:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59091
00:05:32.291 17:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:32.550 17:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59075
00:05:32.550 17:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59075 ']'
00:05:32.550 17:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59075
00:05:32.550 17:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:32.550 17:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:32.550 17:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59075
killing process with pid 59075
17:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:32.550 17:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:32.550 17:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59075'
00:05:32.550 17:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59075
00:05:32.550 17:22:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59075
00:05:35.079 17:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59091
00:05:35.079 17:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59091 ']'
00:05:35.079 17:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59091
00:05:35.079 17:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:35.079 17:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:35.079 17:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59091
killing process with pid 59091
17:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:35.079 17:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:35.079 17:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59091'
00:05:35.079 17:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59091
00:05:35.080 17:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59091
00:05:36.455 ************************************
00:05:36.455 END TEST locking_app_on_unlocked_coremask
00:05:36.455 ************************************
00:05:36.455
00:05:36.455 real 0m6.140s
00:05:36.455 user 0m6.420s
00:05:36.455 sys 0m0.795s
00:05:36.455 17:22:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:36.455 17:22:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:36.455 17:22:09 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:36.455 17:22:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:36.455 17:22:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:36.455 17:22:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:36.455 ************************************
00:05:36.455 START TEST locking_app_on_locked_coremask
00:05:36.455 ************************************
00:05:36.455 17:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:05:36.455 17:22:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59182
00:05:36.455 17:22:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59182 /var/tmp/spdk.sock
00:05:36.455 17:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59182 ']'
00:05:36.455 17:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:36.455 17:22:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:36.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
17:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:36.455 17:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:36.455 17:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:36.455 17:22:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:36.455 [2024-12-07 17:22:09.546887] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:05:36.455 [2024-12-07 17:22:09.547023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59182 ]
00:05:36.455 [2024-12-07 17:22:09.703447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:36.455 [2024-12-07 17:22:09.779425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:37.020 17:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:37.020 17:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:37.020 17:22:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59198
00:05:37.020 17:22:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59198 /var/tmp/spdk2.sock
00:05:37.020 17:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:37.020 17:22:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:37.020 17:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59198 /var/tmp/spdk2.sock
00:05:37.020 17:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:37.020 17:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:37.020 17:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:37.020 17:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:37.020 17:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59198 /var/tmp/spdk2.sock
00:05:37.020 17:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59198 ']'
00:05:37.020 17:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:37.020 17:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:37.020 17:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:37.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
17:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:37.020 17:22:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:37.278 [2024-12-07 17:22:10.446893] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
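The NOT waitforlisten wrapping this second launch encodes the expected outcome: with pid 59182 holding core 0's lock, a second lock-enabled target on the same mask must fail to come up. In outline (waitforlisten and NOT are the harness helpers traced throughout this log):

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  $spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock & pid2=$!   # will abort during startup
  NOT waitforlisten "$pid2" /var/tmp/spdk2.sock       # passes only if it never listens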
00:05:37.278 [2024-12-07 17:22:10.447208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59198 ]
00:05:37.278 [2024-12-07 17:22:10.613132] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59182 has claimed it.
00:05:37.278 [2024-12-07 17:22:10.613178] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:37.843 ERROR: process (pid: 59198) is no longer running
00:05:37.843 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59198) - No such process
00:05:37.843 17:22:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:37.843 17:22:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:37.843 17:22:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:37.843 17:22:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:37.843 17:22:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:37.843 17:22:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:37.843 17:22:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59182
00:05:37.843 17:22:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59182
00:05:37.843 17:22:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:38.102 17:22:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59182
00:05:38.102 17:22:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59182 ']'
00:05:38.102 17:22:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59182
00:05:38.102 17:22:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:38.102 17:22:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:38.102 17:22:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59182
killing process with pid 59182
17:22:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:38.102 17:22:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:38.102 17:22:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59182'
00:05:38.102 17:22:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59182
00:05:38.102 17:22:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59182
00:05:39.474
00:05:39.474 real 0m2.995s
00:05:39.474 user 0m3.188s
00:05:39.474 sys 0m0.568s
00:05:39.474 17:22:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:39.474 ************************************
00:05:39.474 END TEST locking_app_on_locked_coremask
00:05:39.474 ************************************
00:05:39.474 17:22:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:39.474 17:22:12 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:39.474 17:22:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:39.474 17:22:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:39.474 17:22:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:39.474 ************************************
00:05:39.474 START TEST locking_overlapped_coremask
00:05:39.474 ************************************
00:05:39.474 17:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:05:39.474 17:22:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59251
00:05:39.474 17:22:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59251 /var/tmp/spdk.sock
00:05:39.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
17:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59251 ']'
00:05:39.474 17:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:39.474 17:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:39.474 17:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:39.474 17:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:39.474 17:22:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:39.474 17:22:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:05:39.474 [2024-12-07 17:22:12.593111] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:05:39.474 [2024-12-07 17:22:12.593226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59251 ]
00:05:39.474 [2024-12-07 17:22:12.748256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:39.474 [2024-12-07 17:22:12.826550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:39.474 [2024-12-07 17:22:12.826748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:39.474 [2024-12-07 17:22:12.826794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:40.407 17:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:40.407 17:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:40.407 17:22:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:40.407 17:22:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59269
00:05:40.407 17:22:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59269 /var/tmp/spdk2.sock
00:05:40.407 17:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:40.407 17:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59269 /var/tmp/spdk2.sock
00:05:40.407 17:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:40.407 17:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:40.407 17:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:40.407 17:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:40.407 17:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59269 /var/tmp/spdk2.sock
00:05:40.407 17:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59269 ']'
00:05:40.407 17:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:40.407 17:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:40.407 17:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:40.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
17:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:40.407 17:22:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:40.407 [2024-12-07 17:22:13.487199] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
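Core 2 is the deliberate point of contention here: the first target runs with -m 0x7 (cores 0-2) and the second is launched with -m 0x1c (cores 2-4), so the two masks intersect in exactly one bit.

  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2 only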
00:05:40.407 [2024-12-07 17:22:13.487771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59269 ]
00:05:40.407 [2024-12-07 17:22:13.661227] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59251 has claimed it.
00:05:40.407 [2024-12-07 17:22:13.661276] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:40.973 ERROR: process (pid: 59269) is no longer running
00:05:40.973 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59269) - No such process
00:05:40.973 17:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:40.973 17:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:40.973 17:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:40.973 17:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:40.973 17:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:40.973 17:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:40.973 17:22:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:40.973 17:22:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:40.973 17:22:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:40.973 17:22:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:40.973 17:22:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59251
00:05:40.973 17:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59251 ']'
00:05:40.973 17:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59251
00:05:40.973 17:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:05:40.973 17:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:40.973 17:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59251
00:05:40.973 17:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:40.973 17:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:40.973 17:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59251'
killing process with pid 59251
17:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59251
00:05:40.973 17:22:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59251
00:05:42.359
00:05:42.359 real 0m2.838s
00:05:42.359 user 0m7.753s
00:05:42.359 sys 0m0.411s
00:05:42.359 17:22:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:42.359 17:22:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:42.359 ************************************
00:05:42.359 END TEST locking_overlapped_coremask
00:05:42.359 ************************************
00:05:42.359 17:22:15 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:42.359 17:22:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:42.359 17:22:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:42.359 17:22:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:42.359 ************************************
00:05:42.359 START TEST locking_overlapped_coremask_via_rpc
00:05:42.359 ************************************
00:05:42.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
17:22:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:05:42.359 17:22:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59322
00:05:42.359 17:22:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59322 /var/tmp/spdk.sock
00:05:42.359 17:22:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59322 ']'
00:05:42.359 17:22:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:42.359 17:22:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:42.359 17:22:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:42.359 17:22:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:42.359 17:22:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:42.359 17:22:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:42.359 [2024-12-07 17:22:15.467047] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:05:42.359 [2024-12-07 17:22:15.467153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59322 ]
00:05:42.359 [2024-12-07 17:22:15.619206] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
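Before the via_rpc variant proceeds: the check_remaining_locks step in the test that just ended reduces to comparing a glob of the lock directory against the set a 3-core (0x7) target should leave behind. A sketch taken straight from the traced expansion:

  locks=(/var/tmp/spdk_cpu_lock_*)
  expected=(/var/tmp/spdk_cpu_lock_{000..002})           # cores 0, 1 and 2
  [[ ${locks[*]} == "${expected[*]}" ]] && echo 'exactly cores 0-2 are locked'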
00:05:42.359 [2024-12-07 17:22:15.619247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.359 [2024-12-07 17:22:15.699624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.359 [2024-12-07 17:22:15.699772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.359 [2024-12-07 17:22:15.699800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.924 17:22:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.924 17:22:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:42.924 17:22:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:42.924 17:22:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59340 00:05:42.924 17:22:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59340 /var/tmp/spdk2.sock 00:05:42.924 17:22:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59340 ']' 00:05:42.924 17:22:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.924 17:22:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.924 17:22:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:42.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:42.924 17:22:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.924 17:22:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.182 [2024-12-07 17:22:16.319278] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:43.182 [2024-12-07 17:22:16.319512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59340 ] 00:05:43.182 [2024-12-07 17:22:16.489992] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:43.182 [2024-12-07 17:22:16.490041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:43.439 [2024-12-07 17:22:16.698552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.439 [2024-12-07 17:22:16.698708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.439 [2024-12-07 17:22:16.698734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.813 [2024-12-07 17:22:17.814104] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59322 has claimed it. 00:05:44.813 request: 00:05:44.813 { 00:05:44.813 "method": "framework_enable_cpumask_locks", 00:05:44.813 "req_id": 1 00:05:44.813 } 00:05:44.813 Got JSON-RPC error response 00:05:44.813 response: 00:05:44.813 { 00:05:44.813 "code": -32603, 00:05:44.813 "message": "Failed to claim CPU core: 2" 00:05:44.813 } 00:05:44.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59322 /var/tmp/spdk.sock 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59322 ']' 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.813 17:22:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.813 17:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.813 17:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:44.813 17:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59340 /var/tmp/spdk2.sock 00:05:44.813 17:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59340 ']' 00:05:44.813 17:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.813 17:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.813 17:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:44.813 17:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.813 17:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.072 17:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.072 17:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:45.072 17:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:45.072 17:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:45.072 17:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:45.072 17:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:45.072 00:05:45.072 real 0m2.850s 00:05:45.072 user 0m0.995s 00:05:45.072 sys 0m0.112s 00:05:45.072 17:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.072 17:22:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.072 ************************************ 00:05:45.072 END TEST locking_overlapped_coremask_via_rpc 00:05:45.072 ************************************ 00:05:45.072 17:22:18 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:45.072 17:22:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59322 ]] 00:05:45.072 17:22:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59322 00:05:45.072 17:22:18 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59322 ']' 00:05:45.072 17:22:18 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59322 00:05:45.072 17:22:18 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:45.072 17:22:18 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.072 17:22:18 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59322 00:05:45.072 17:22:18 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.072 killing process with pid 59322 00:05:45.072 17:22:18 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.072 17:22:18 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59322' 00:05:45.072 17:22:18 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59322 00:05:45.072 17:22:18 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59322 00:05:46.446 17:22:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59340 ]] 00:05:46.446 17:22:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59340 00:05:46.446 17:22:19 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59340 ']' 00:05:46.446 17:22:19 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59340 00:05:46.446 17:22:19 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:46.446 17:22:19 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.446 
17:22:19 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59340 00:05:46.446 killing process with pid 59340 00:05:46.446 17:22:19 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:46.446 17:22:19 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:46.446 17:22:19 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59340' 00:05:46.446 17:22:19 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59340 00:05:46.446 17:22:19 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59340 00:05:47.379 17:22:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:47.638 Process with pid 59322 is not found 00:05:47.638 17:22:20 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:47.638 17:22:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59322 ]] 00:05:47.639 17:22:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59322 00:05:47.639 17:22:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59322 ']' 00:05:47.639 17:22:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59322 00:05:47.639 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59322) - No such process 00:05:47.639 17:22:20 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59322 is not found' 00:05:47.639 17:22:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59340 ]] 00:05:47.639 17:22:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59340 00:05:47.639 17:22:20 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59340 ']' 00:05:47.639 Process with pid 59340 is not found 00:05:47.639 17:22:20 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59340 00:05:47.639 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59340) - No such process 00:05:47.639 17:22:20 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59340 is not found' 00:05:47.639 17:22:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:47.639 ************************************ 00:05:47.639 END TEST cpu_locks 00:05:47.639 ************************************ 00:05:47.639 00:05:47.639 real 0m28.512s 00:05:47.639 user 0m49.563s 00:05:47.639 sys 0m4.356s 00:05:47.639 17:22:20 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.639 17:22:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.639 ************************************ 00:05:47.639 END TEST event 00:05:47.639 ************************************ 00:05:47.639 00:05:47.639 real 0m53.899s 00:05:47.639 user 1m41.055s 00:05:47.639 sys 0m7.173s 00:05:47.639 17:22:20 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.639 17:22:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.639 17:22:20 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:47.639 17:22:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.639 17:22:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.639 17:22:20 -- common/autotest_common.sh@10 -- # set +x 00:05:47.639 ************************************ 00:05:47.639 START TEST thread 00:05:47.639 ************************************ 00:05:47.639 17:22:20 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:47.639 * Looking for test storage... 
00:05:47.639 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:47.639 17:22:20 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:47.639 17:22:20 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:47.639 17:22:20 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:47.639 17:22:20 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:47.639 17:22:20 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.639 17:22:20 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.639 17:22:20 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.639 17:22:20 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.639 17:22:20 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.639 17:22:20 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.639 17:22:20 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.639 17:22:20 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.639 17:22:20 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.639 17:22:20 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.639 17:22:20 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.639 17:22:20 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:47.639 17:22:20 thread -- scripts/common.sh@345 -- # : 1 00:05:47.639 17:22:20 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.639 17:22:20 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.639 17:22:20 thread -- scripts/common.sh@365 -- # decimal 1 00:05:47.639 17:22:20 thread -- scripts/common.sh@353 -- # local d=1 00:05:47.639 17:22:20 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.639 17:22:20 thread -- scripts/common.sh@355 -- # echo 1 00:05:47.639 17:22:20 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.639 17:22:20 thread -- scripts/common.sh@366 -- # decimal 2 00:05:47.639 17:22:20 thread -- scripts/common.sh@353 -- # local d=2 00:05:47.639 17:22:20 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.639 17:22:20 thread -- scripts/common.sh@355 -- # echo 2 00:05:47.639 17:22:20 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.639 17:22:20 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.639 17:22:20 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.639 17:22:20 thread -- scripts/common.sh@368 -- # return 0 00:05:47.639 17:22:20 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.639 17:22:20 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:47.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.639 --rc genhtml_branch_coverage=1 00:05:47.639 --rc genhtml_function_coverage=1 00:05:47.639 --rc genhtml_legend=1 00:05:47.639 --rc geninfo_all_blocks=1 00:05:47.639 --rc geninfo_unexecuted_blocks=1 00:05:47.639 00:05:47.639 ' 00:05:47.639 17:22:20 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:47.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.639 --rc genhtml_branch_coverage=1 00:05:47.639 --rc genhtml_function_coverage=1 00:05:47.639 --rc genhtml_legend=1 00:05:47.639 --rc geninfo_all_blocks=1 00:05:47.639 --rc geninfo_unexecuted_blocks=1 00:05:47.639 00:05:47.639 ' 00:05:47.639 17:22:20 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:47.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:47.639 --rc genhtml_branch_coverage=1 00:05:47.639 --rc genhtml_function_coverage=1 00:05:47.639 --rc genhtml_legend=1 00:05:47.639 --rc geninfo_all_blocks=1 00:05:47.639 --rc geninfo_unexecuted_blocks=1 00:05:47.639 00:05:47.639 ' 00:05:47.639 17:22:20 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:47.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.639 --rc genhtml_branch_coverage=1 00:05:47.639 --rc genhtml_function_coverage=1 00:05:47.639 --rc genhtml_legend=1 00:05:47.639 --rc geninfo_all_blocks=1 00:05:47.639 --rc geninfo_unexecuted_blocks=1 00:05:47.639 00:05:47.639 ' 00:05:47.639 17:22:20 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:47.639 17:22:20 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:47.639 17:22:20 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.639 17:22:20 thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.639 ************************************ 00:05:47.639 START TEST thread_poller_perf 00:05:47.639 ************************************ 00:05:47.639 17:22:20 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:47.639 [2024-12-07 17:22:21.006183] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:47.639 [2024-12-07 17:22:21.006291] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59495 ] 00:05:47.900 [2024-12-07 17:22:21.159758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.900 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:47.900 [2024-12-07 17:22:21.252039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.334 [2024-12-07T17:22:22.716Z] ====================================== 00:05:49.334 [2024-12-07T17:22:22.716Z] busy:2611805784 (cyc) 00:05:49.334 [2024-12-07T17:22:22.716Z] total_run_count: 404000 00:05:49.334 [2024-12-07T17:22:22.716Z] tsc_hz: 2600000000 (cyc) 00:05:49.334 [2024-12-07T17:22:22.716Z] ====================================== 00:05:49.334 [2024-12-07T17:22:22.716Z] poller_cost: 6464 (cyc), 2486 (nsec) 00:05:49.334 00:05:49.334 real 0m1.405s 00:05:49.334 user 0m1.234s 00:05:49.334 sys 0m0.064s 00:05:49.334 17:22:22 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.334 17:22:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:49.334 ************************************ 00:05:49.334 END TEST thread_poller_perf 00:05:49.334 ************************************ 00:05:49.334 17:22:22 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:49.334 17:22:22 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:49.334 17:22:22 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.334 17:22:22 thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.334 ************************************ 00:05:49.334 START TEST thread_poller_perf 00:05:49.334 ************************************ 00:05:49.334 17:22:22 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:49.334 [2024-12-07 17:22:22.454446] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:49.334 [2024-12-07 17:22:22.454820] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59531 ] 00:05:49.334 [2024-12-07 17:22:22.608276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.334 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:49.334 [2024-12-07 17:22:22.693201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.705 [2024-12-07T17:22:24.087Z] ====================================== 00:05:50.705 [2024-12-07T17:22:24.087Z] busy:2602697494 (cyc) 00:05:50.705 [2024-12-07T17:22:24.087Z] total_run_count: 4889000 00:05:50.705 [2024-12-07T17:22:24.087Z] tsc_hz: 2600000000 (cyc) 00:05:50.705 [2024-12-07T17:22:24.087Z] ====================================== 00:05:50.705 [2024-12-07T17:22:24.087Z] poller_cost: 532 (cyc), 204 (nsec) 00:05:50.705 00:05:50.705 real 0m1.396s 00:05:50.705 user 0m1.224s 00:05:50.705 sys 0m0.065s 00:05:50.705 17:22:23 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.705 17:22:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.705 ************************************ 00:05:50.705 END TEST thread_poller_perf 00:05:50.705 ************************************ 00:05:50.705 17:22:23 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:50.705 00:05:50.705 real 0m3.008s 00:05:50.705 user 0m2.560s 00:05:50.705 sys 0m0.237s 00:05:50.705 17:22:23 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.705 17:22:23 thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.705 ************************************ 00:05:50.705 END TEST thread 00:05:50.705 ************************************ 00:05:50.705 17:22:23 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:50.705 17:22:23 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:50.705 17:22:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.705 17:22:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.705 17:22:23 -- common/autotest_common.sh@10 -- # set +x 00:05:50.705 ************************************ 00:05:50.705 START TEST app_cmdline 00:05:50.705 ************************************ 00:05:50.705 17:22:23 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:50.705 * Looking for test storage... 
00:05:50.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:50.705 17:22:23 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:50.705 17:22:23 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:50.705 17:22:23 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:50.705 17:22:23 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:50.705 17:22:23 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.705 17:22:23 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.705 17:22:23 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.705 17:22:24 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:50.705 17:22:24 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.705 17:22:24 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:50.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.705 --rc genhtml_branch_coverage=1 00:05:50.705 --rc genhtml_function_coverage=1 00:05:50.705 --rc genhtml_legend=1 00:05:50.705 --rc geninfo_all_blocks=1 00:05:50.705 --rc geninfo_unexecuted_blocks=1 00:05:50.705 00:05:50.705 ' 00:05:50.705 17:22:24 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:50.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.705 --rc genhtml_branch_coverage=1 00:05:50.705 --rc genhtml_function_coverage=1 00:05:50.705 --rc genhtml_legend=1 00:05:50.705 --rc geninfo_all_blocks=1 00:05:50.705 --rc geninfo_unexecuted_blocks=1 00:05:50.705 
00:05:50.705 ' 00:05:50.705 17:22:24 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:50.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.705 --rc genhtml_branch_coverage=1 00:05:50.705 --rc genhtml_function_coverage=1 00:05:50.705 --rc genhtml_legend=1 00:05:50.705 --rc geninfo_all_blocks=1 00:05:50.705 --rc geninfo_unexecuted_blocks=1 00:05:50.705 00:05:50.705 ' 00:05:50.705 17:22:24 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:50.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.705 --rc genhtml_branch_coverage=1 00:05:50.705 --rc genhtml_function_coverage=1 00:05:50.705 --rc genhtml_legend=1 00:05:50.705 --rc geninfo_all_blocks=1 00:05:50.705 --rc geninfo_unexecuted_blocks=1 00:05:50.705 00:05:50.705 ' 00:05:50.705 17:22:24 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:50.705 17:22:24 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59615 00:05:50.705 17:22:24 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59615 00:05:50.705 17:22:24 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59615 ']' 00:05:50.705 17:22:24 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.705 17:22:24 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.705 17:22:24 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.705 17:22:24 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.705 17:22:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:50.705 17:22:24 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:50.962 [2024-12-07 17:22:24.089167] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:50.963 [2024-12-07 17:22:24.089284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59615 ] 00:05:50.963 [2024-12-07 17:22:24.243914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.963 [2024-12-07 17:22:24.337835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.895 17:22:24 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.895 17:22:24 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:51.895 17:22:24 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:51.895 { 00:05:51.895 "version": "SPDK v25.01-pre git sha1 a2f5e1c2d", 00:05:51.895 "fields": { 00:05:51.895 "major": 25, 00:05:51.895 "minor": 1, 00:05:51.895 "patch": 0, 00:05:51.895 "suffix": "-pre", 00:05:51.895 "commit": "a2f5e1c2d" 00:05:51.895 } 00:05:51.895 } 00:05:51.895 17:22:25 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:51.895 17:22:25 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:51.895 17:22:25 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:51.895 17:22:25 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:51.895 17:22:25 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:51.895 17:22:25 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:51.895 17:22:25 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:51.895 17:22:25 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.895 17:22:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:51.895 17:22:25 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.895 17:22:25 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:51.895 17:22:25 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:51.895 17:22:25 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:51.895 17:22:25 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:51.895 17:22:25 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:51.895 17:22:25 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:51.895 17:22:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.895 17:22:25 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:51.895 17:22:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.895 17:22:25 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:51.895 17:22:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.895 17:22:25 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:51.895 17:22:25 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:51.895 17:22:25 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:52.153 request: 00:05:52.153 { 00:05:52.153 "method": "env_dpdk_get_mem_stats", 00:05:52.153 "req_id": 1 00:05:52.153 } 00:05:52.153 Got JSON-RPC error response 00:05:52.153 response: 00:05:52.153 { 00:05:52.153 "code": -32601, 00:05:52.153 "message": "Method not found" 00:05:52.153 } 00:05:52.153 17:22:25 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:52.153 17:22:25 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:52.153 17:22:25 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:52.153 17:22:25 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:52.153 17:22:25 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59615 00:05:52.153 17:22:25 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59615 ']' 00:05:52.153 17:22:25 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59615 00:05:52.153 17:22:25 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:52.153 17:22:25 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.153 17:22:25 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59615 00:05:52.153 killing process with pid 59615 00:05:52.153 17:22:25 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.153 17:22:25 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.153 17:22:25 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59615' 00:05:52.153 17:22:25 app_cmdline -- common/autotest_common.sh@973 -- # kill 59615 00:05:52.153 17:22:25 app_cmdline -- common/autotest_common.sh@978 -- # wait 59615 00:05:53.527 00:05:53.527 real 0m2.693s 00:05:53.527 user 0m2.940s 00:05:53.527 sys 0m0.452s 00:05:53.527 17:22:26 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.527 ************************************ 00:05:53.527 END TEST app_cmdline 00:05:53.527 ************************************ 00:05:53.527 17:22:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:53.527 17:22:26 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:53.527 17:22:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.527 17:22:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.527 17:22:26 -- common/autotest_common.sh@10 -- # set +x 00:05:53.527 ************************************ 00:05:53.527 START TEST version 00:05:53.527 ************************************ 00:05:53.527 17:22:26 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:53.527 * Looking for test storage... 
00:05:53.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:53.527 17:22:26 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:53.527 17:22:26 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:53.527 17:22:26 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:53.527 17:22:26 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:53.527 17:22:26 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.527 17:22:26 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.527 17:22:26 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.527 17:22:26 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.527 17:22:26 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.527 17:22:26 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.527 17:22:26 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.527 17:22:26 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.527 17:22:26 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.527 17:22:26 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.528 17:22:26 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.528 17:22:26 version -- scripts/common.sh@344 -- # case "$op" in 00:05:53.528 17:22:26 version -- scripts/common.sh@345 -- # : 1 00:05:53.528 17:22:26 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.528 17:22:26 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.528 17:22:26 version -- scripts/common.sh@365 -- # decimal 1 00:05:53.528 17:22:26 version -- scripts/common.sh@353 -- # local d=1 00:05:53.528 17:22:26 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.528 17:22:26 version -- scripts/common.sh@355 -- # echo 1 00:05:53.528 17:22:26 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.528 17:22:26 version -- scripts/common.sh@366 -- # decimal 2 00:05:53.528 17:22:26 version -- scripts/common.sh@353 -- # local d=2 00:05:53.528 17:22:26 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.528 17:22:26 version -- scripts/common.sh@355 -- # echo 2 00:05:53.528 17:22:26 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.528 17:22:26 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.528 17:22:26 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.528 17:22:26 version -- scripts/common.sh@368 -- # return 0 00:05:53.528 17:22:26 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.528 17:22:26 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:53.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.528 --rc genhtml_branch_coverage=1 00:05:53.528 --rc genhtml_function_coverage=1 00:05:53.528 --rc genhtml_legend=1 00:05:53.528 --rc geninfo_all_blocks=1 00:05:53.528 --rc geninfo_unexecuted_blocks=1 00:05:53.528 00:05:53.528 ' 00:05:53.528 17:22:26 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:53.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.528 --rc genhtml_branch_coverage=1 00:05:53.528 --rc genhtml_function_coverage=1 00:05:53.528 --rc genhtml_legend=1 00:05:53.528 --rc geninfo_all_blocks=1 00:05:53.528 --rc geninfo_unexecuted_blocks=1 00:05:53.528 00:05:53.528 ' 00:05:53.528 17:22:26 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:53.528 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:53.528 --rc genhtml_branch_coverage=1 00:05:53.528 --rc genhtml_function_coverage=1 00:05:53.528 --rc genhtml_legend=1 00:05:53.528 --rc geninfo_all_blocks=1 00:05:53.528 --rc geninfo_unexecuted_blocks=1 00:05:53.528 00:05:53.528 ' 00:05:53.528 17:22:26 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:53.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.528 --rc genhtml_branch_coverage=1 00:05:53.528 --rc genhtml_function_coverage=1 00:05:53.528 --rc genhtml_legend=1 00:05:53.528 --rc geninfo_all_blocks=1 00:05:53.528 --rc geninfo_unexecuted_blocks=1 00:05:53.528 00:05:53.528 ' 00:05:53.528 17:22:26 version -- app/version.sh@17 -- # get_header_version major 00:05:53.528 17:22:26 version -- app/version.sh@14 -- # cut -f2 00:05:53.528 17:22:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:53.528 17:22:26 version -- app/version.sh@14 -- # tr -d '"' 00:05:53.528 17:22:26 version -- app/version.sh@17 -- # major=25 00:05:53.528 17:22:26 version -- app/version.sh@18 -- # get_header_version minor 00:05:53.528 17:22:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:53.528 17:22:26 version -- app/version.sh@14 -- # cut -f2 00:05:53.528 17:22:26 version -- app/version.sh@14 -- # tr -d '"' 00:05:53.528 17:22:26 version -- app/version.sh@18 -- # minor=1 00:05:53.528 17:22:26 version -- app/version.sh@19 -- # get_header_version patch 00:05:53.528 17:22:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:53.528 17:22:26 version -- app/version.sh@14 -- # cut -f2 00:05:53.528 17:22:26 version -- app/version.sh@14 -- # tr -d '"' 00:05:53.528 17:22:26 version -- app/version.sh@19 -- # patch=0 00:05:53.528 17:22:26 version -- app/version.sh@20 -- # get_header_version suffix 00:05:53.528 17:22:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:53.528 17:22:26 version -- app/version.sh@14 -- # cut -f2 00:05:53.528 17:22:26 version -- app/version.sh@14 -- # tr -d '"' 00:05:53.528 17:22:26 version -- app/version.sh@20 -- # suffix=-pre 00:05:53.528 17:22:26 version -- app/version.sh@22 -- # version=25.1 00:05:53.528 17:22:26 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:53.528 17:22:26 version -- app/version.sh@28 -- # version=25.1rc0 00:05:53.528 17:22:26 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:53.528 17:22:26 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:53.528 17:22:26 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:53.528 17:22:26 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:53.528 ************************************ 00:05:53.528 END TEST version 00:05:53.528 ************************************ 00:05:53.528 00:05:53.528 real 0m0.195s 00:05:53.528 user 0m0.117s 00:05:53.528 sys 0m0.103s 00:05:53.528 17:22:26 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.528 17:22:26 version -- common/autotest_common.sh@10 -- # set +x 00:05:53.528 17:22:26 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:53.528 17:22:26 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:53.528 17:22:26 -- spdk/autotest.sh@194 -- # uname -s 00:05:53.528 17:22:26 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:53.528 17:22:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:53.528 17:22:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:53.528 17:22:26 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:53.528 17:22:26 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:53.528 17:22:26 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:53.528 17:22:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.528 17:22:26 -- common/autotest_common.sh@10 -- # set +x 00:05:53.528 ************************************ 00:05:53.528 START TEST blockdev_nvme 00:05:53.528 ************************************ 00:05:53.528 17:22:26 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:53.528 * Looking for test storage... 00:05:53.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:53.788 17:22:26 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:53.788 17:22:26 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:05:53.788 17:22:26 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:53.788 17:22:26 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.788 17:22:26 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:53.788 17:22:26 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.788 17:22:26 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:53.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.788 --rc genhtml_branch_coverage=1 00:05:53.788 --rc genhtml_function_coverage=1 00:05:53.788 --rc genhtml_legend=1 00:05:53.788 --rc geninfo_all_blocks=1 00:05:53.788 --rc geninfo_unexecuted_blocks=1 00:05:53.788 00:05:53.788 ' 00:05:53.788 17:22:26 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:53.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.788 --rc genhtml_branch_coverage=1 00:05:53.788 --rc genhtml_function_coverage=1 00:05:53.788 --rc genhtml_legend=1 00:05:53.788 --rc geninfo_all_blocks=1 00:05:53.788 --rc geninfo_unexecuted_blocks=1 00:05:53.788 00:05:53.788 ' 00:05:53.788 17:22:26 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:53.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.788 --rc genhtml_branch_coverage=1 00:05:53.788 --rc genhtml_function_coverage=1 00:05:53.788 --rc genhtml_legend=1 00:05:53.788 --rc geninfo_all_blocks=1 00:05:53.788 --rc geninfo_unexecuted_blocks=1 00:05:53.788 00:05:53.788 ' 00:05:53.788 17:22:26 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:53.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.788 --rc genhtml_branch_coverage=1 00:05:53.788 --rc genhtml_function_coverage=1 00:05:53.788 --rc genhtml_legend=1 00:05:53.788 --rc geninfo_all_blocks=1 00:05:53.788 --rc geninfo_unexecuted_blocks=1 00:05:53.788 00:05:53.788 ' 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:53.788 17:22:26 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59787 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59787 00:05:53.788 17:22:26 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59787 ']' 00:05:53.788 17:22:26 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.788 17:22:26 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.788 17:22:26 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.788 17:22:26 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.788 17:22:26 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:53.788 17:22:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.788 [2024-12-07 17:22:27.067786] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:53.788 [2024-12-07 17:22:27.068052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59787 ] 00:05:54.049 [2024-12-07 17:22:27.221056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.049 [2024-12-07 17:22:27.317991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.616 17:22:27 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.616 17:22:27 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:05:54.616 17:22:27 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:05:54.616 17:22:27 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:05:54.616 17:22:27 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:54.616 17:22:27 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:54.616 17:22:27 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:54.616 17:22:27 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:54.616 17:22:27 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.616 17:22:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:54.874 17:22:28 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.874 17:22:28 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:05:54.874 17:22:28 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.874 17:22:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:54.874 17:22:28 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.874 17:22:28 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:05:54.874 17:22:28 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:05:54.874 17:22:28 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.874 17:22:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:54.874 17:22:28 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.874 17:22:28 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:05:54.874 17:22:28 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.874 17:22:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:55.134 17:22:28 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.134 17:22:28 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:55.134 17:22:28 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.134 17:22:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:55.134 17:22:28 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.134 17:22:28 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:05:55.134 17:22:28 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:05:55.134 17:22:28 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:05:55.134 17:22:28 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.134 17:22:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:55.134 17:22:28 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.134 17:22:28 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:05:55.134 17:22:28 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:05:55.135 17:22:28 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "26ed2c20-1fb1-47ff-813e-7fc5ab727da0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "26ed2c20-1fb1-47ff-813e-7fc5ab727da0",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "d78c6841-7bbc-41d7-a2b0-f758010e0d4c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d78c6841-7bbc-41d7-a2b0-f758010e0d4c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "7329d56d-34a8-46ee-9b47-0fd8c9624845"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7329d56d-34a8-46ee-9b47-0fd8c9624845",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "02214dcf-4d97-4e39-b7d0-8168b77fcf2c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "02214dcf-4d97-4e39-b7d0-8168b77fcf2c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "14a830a4-4273-42d3-ba20-09f88d445198"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "14a830a4-4273-42d3-ba20-09f88d445198",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "26821afb-3780-4ed0-ba49-e5f1abd54434"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "26821afb-3780-4ed0-ba49-e5f1abd54434",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:55.135 17:22:28 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:05:55.135 17:22:28 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:05:55.135 17:22:28 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:05:55.135 17:22:28 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 59787 00:05:55.135 17:22:28 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59787 ']' 00:05:55.135 17:22:28 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59787 00:05:55.135 17:22:28 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:05:55.135 17:22:28 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.135 17:22:28 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59787 00:05:55.135 killing process with pid 59787 00:05:55.135 17:22:28 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.135 17:22:28 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.135 17:22:28 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59787' 00:05:55.135 17:22:28 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59787 00:05:55.135 17:22:28 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59787 00:05:56.515 17:22:29 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:56.515 17:22:29 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:56.515 17:22:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:05:56.515 17:22:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.515 17:22:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:56.777 ************************************ 00:05:56.777 START TEST bdev_hello_world 00:05:56.777 ************************************ 00:05:56.777 17:22:29 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:56.777 [2024-12-07 17:22:29.963220] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:05:56.777 [2024-12-07 17:22:29.963324] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59865 ] 00:05:56.777 [2024-12-07 17:22:30.121383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.036 [2024-12-07 17:22:30.220241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.604 [2024-12-07 17:22:30.759067] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:57.604 [2024-12-07 17:22:30.759114] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:05:57.604 [2024-12-07 17:22:30.759135] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:57.604 [2024-12-07 17:22:30.761679] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:57.604 [2024-12-07 17:22:30.762522] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:57.604 [2024-12-07 17:22:30.762551] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:57.604 [2024-12-07 17:22:30.763037] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
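For reference, the bdev_hello_world step above reduces to a single command against the generated JSON config; a minimal by-hand rerun, as a sketch that assumes the same repo layout on this test VM and the four QEMU NVMe controllers from bdev.json still attached, would look like:

  cd /home/vagrant/spdk_repo/spdk
  # opens bdev Nvme0n1 via the attach-controller config in test/bdev/bdev.json,
  # writes "Hello World!", reads it back, then stops the app (the NOTICEs above)
  sudo ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1

The -b flag names the bdev to open; any of the Nvme*n* bdevs listed by bdev_get_bdevs earlier would work the same way.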
00:05:57.604 00:05:57.604 [2024-12-07 17:22:30.763061] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:58.177 ************************************ 00:05:58.177 END TEST bdev_hello_world 00:05:58.177 ************************************ 00:05:58.177 00:05:58.177 real 0m1.573s 00:05:58.177 user 0m1.296s 00:05:58.177 sys 0m0.166s 00:05:58.177 17:22:31 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.177 17:22:31 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:58.177 17:22:31 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:05:58.177 17:22:31 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:58.177 17:22:31 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.177 17:22:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:58.177 ************************************ 00:05:58.177 START TEST bdev_bounds 00:05:58.177 ************************************ 00:05:58.177 17:22:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:05:58.177 17:22:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59902 00:05:58.177 Process bdevio pid: 59902 00:05:58.177 17:22:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.177 17:22:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59902' 00:05:58.177 17:22:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59902 00:05:58.177 17:22:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:58.177 17:22:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59902 ']' 00:05:58.177 17:22:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.177 17:22:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.177 17:22:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.177 17:22:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.177 17:22:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:58.437 [2024-12-07 17:22:31.603113] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:05:58.437 [2024-12-07 17:22:31.603231] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59902 ] 00:05:58.437 [2024-12-07 17:22:31.758287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.697 [2024-12-07 17:22:31.860616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.697 [2024-12-07 17:22:31.861071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.697 [2024-12-07 17:22:31.861247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.268 17:22:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.268 17:22:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:05:59.268 17:22:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:59.268 I/O targets: 00:05:59.268 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:05:59.268 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:05:59.268 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:59.268 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:59.268 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:59.268 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:05:59.268 00:05:59.268 00:05:59.268 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.268 http://cunit.sourceforge.net/ 00:05:59.268 00:05:59.268 00:05:59.268 Suite: bdevio tests on: Nvme3n1 00:05:59.268 Test: blockdev write read block ...passed 00:05:59.268 Test: blockdev write zeroes read block ...passed 00:05:59.268 Test: blockdev write zeroes read no split ...passed 00:05:59.268 Test: blockdev write zeroes read split ...passed 00:05:59.268 Test: blockdev write zeroes read split partial ...passed 00:05:59.268 Test: blockdev reset ...[2024-12-07 17:22:32.597439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:05:59.268 [2024-12-07 17:22:32.600346] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:05:59.268 passed 00:05:59.268 Test: blockdev write read 8 blocks ...passed 
00:05:59.268 Test: blockdev write read size > 128k ...passed 00:05:59.268 Test: blockdev write read invalid size ...passed 00:05:59.268 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:59.268 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:59.268 Test: blockdev write read max offset ...passed 00:05:59.268 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:59.268 Test: blockdev writev readv 8 blocks ...passed 00:05:59.268 Test: blockdev writev readv 30 x 1block ...passed 00:05:59.268 Test: blockdev writev readv block ...passed 00:05:59.268 Test: blockdev writev readv size > 128k ...passed 00:05:59.268 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:59.268 Test: blockdev comparev and writev ...[2024-12-07 17:22:32.622354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b480a000 len:0x1000 00:05:59.268 [2024-12-07 17:22:32.622449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:59.268 passed 00:05:59.268 Test: blockdev nvme passthru rw ...passed 00:05:59.268 Test: blockdev nvme passthru vendor specific ...[2024-12-07 17:22:32.624620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:59.268 passed 00:05:59.268 Test: blockdev nvme admin passthru ...[2024-12-07 17:22:32.624668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:59.268 passed 00:05:59.268 Test: blockdev copy ...passed 00:05:59.268 Suite: bdevio tests on: Nvme2n3 00:05:59.268 Test: blockdev write read block ...passed 00:05:59.268 Test: blockdev write zeroes read block ...passed 00:05:59.268 Test: blockdev write zeroes read no split ...passed 00:05:59.529 Test: blockdev write zeroes read split ...passed 00:05:59.529 Test: blockdev write zeroes read split partial ...passed 00:05:59.529 Test: blockdev reset ...[2024-12-07 17:22:32.679442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:59.529 [2024-12-07 17:22:32.683085] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:05:59.529 passed 00:05:59.529 Test: blockdev write read 8 blocks ...passed 
00:05:59.529 Test: blockdev write read size > 128k ...passed 00:05:59.529 Test: blockdev write read invalid size ...passed 00:05:59.529 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:59.529 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:59.529 Test: blockdev write read max offset ...passed 00:05:59.529 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:59.529 Test: blockdev writev readv 8 blocks ...passed 00:05:59.529 Test: blockdev writev readv 30 x 1block ...passed 00:05:59.529 Test: blockdev writev readv block ...passed 00:05:59.529 Test: blockdev writev readv size > 128k ...passed 00:05:59.529 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:59.529 Test: blockdev comparev and writev ...[2024-12-07 17:22:32.693753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x297a06000 len:0x1000 00:05:59.529 [2024-12-07 17:22:32.693802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:59.529 passed 00:05:59.529 Test: blockdev nvme passthru rw ...passed 00:05:59.529 Test: blockdev nvme passthru vendor specific ...passed 00:05:59.529 Test: blockdev nvme admin passthru ...[2024-12-07 17:22:32.694618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:59.529 [2024-12-07 17:22:32.694648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:59.529 passed 00:05:59.529 Test: blockdev copy ...passed 00:05:59.529 Suite: bdevio tests on: Nvme2n2 00:05:59.529 Test: blockdev write read block ...passed 00:05:59.529 Test: blockdev write zeroes read block ...passed 00:05:59.529 Test: blockdev write zeroes read no split ...passed 00:05:59.529 Test: blockdev write zeroes read split ...passed 00:05:59.529 Test: blockdev write zeroes read split partial ...passed 00:05:59.529 Test: blockdev reset ...[2024-12-07 17:22:32.753948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:59.529 [2024-12-07 17:22:32.757868] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:05:59.529 passed 00:05:59.529 Test: blockdev write read 8 blocks ...passed 
00:05:59.529 Test: blockdev write read size > 128k ...passed 00:05:59.529 Test: blockdev write read invalid size ...passed 00:05:59.529 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:59.529 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:59.529 Test: blockdev write read max offset ...passed 00:05:59.529 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:59.529 Test: blockdev writev readv 8 blocks ...passed 00:05:59.529 Test: blockdev writev readv 30 x 1block ...passed 00:05:59.529 Test: blockdev writev readv block ...passed 00:05:59.529 Test: blockdev writev readv size > 128k ...passed 00:05:59.529 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:59.529 Test: blockdev comparev and writev ...[2024-12-07 17:22:32.775767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6e3c000 len:0x1000 00:05:59.529 [2024-12-07 17:22:32.775919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:59.529 passed 00:05:59.529 Test: blockdev nvme passthru rw ...passed 00:05:59.529 Test: blockdev nvme passthru vendor specific ...[2024-12-07 17:22:32.776698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:59.529 [2024-12-07 17:22:32.776800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:59.529 passed 00:05:59.529 Test: blockdev nvme admin passthru ...passed 00:05:59.529 Test: blockdev copy ...passed 00:05:59.529 Suite: bdevio tests on: Nvme2n1 00:05:59.529 Test: blockdev write read block ...passed 00:05:59.529 Test: blockdev write zeroes read block ...passed 00:05:59.529 Test: blockdev write zeroes read no split ...passed 00:05:59.529 Test: blockdev write zeroes read split ...passed 00:05:59.529 Test: blockdev write zeroes read split partial ...passed 00:05:59.529 Test: blockdev reset ...[2024-12-07 17:22:32.829990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:59.529 [2024-12-07 17:22:32.833720] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:05:59.529 passed 00:05:59.529 Test: blockdev write read 8 blocks ...passed 
00:05:59.529 Test: blockdev write read size > 128k ...passed 00:05:59.529 Test: blockdev write read invalid size ...passed 00:05:59.529 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:59.529 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:59.529 Test: blockdev write read max offset ...passed 00:05:59.529 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:59.529 Test: blockdev writev readv 8 blocks ...passed 00:05:59.529 Test: blockdev writev readv 30 x 1block ...passed 00:05:59.529 Test: blockdev writev readv block ...passed 00:05:59.529 Test: blockdev writev readv size > 128k ...passed 00:05:59.529 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:59.529 Test: blockdev comparev and writev ...[2024-12-07 17:22:32.853013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6e38000 len:0x1000 00:05:59.529 [2024-12-07 17:22:32.853055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:59.529 passed 00:05:59.529 Test: blockdev nvme passthru rw ...passed 00:05:59.529 Test: blockdev nvme passthru vendor specific ...passed 00:05:59.529 Test: blockdev nvme admin passthru ...[2024-12-07 17:22:32.855326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:59.529 [2024-12-07 17:22:32.855360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:59.529 passed 00:05:59.529 Test: blockdev copy ...passed 00:05:59.529 Suite: bdevio tests on: Nvme1n1 00:05:59.529 Test: blockdev write read block ...passed 00:05:59.790 Test: blockdev write zeroes read block ...passed 00:05:59.790 Test: blockdev write zeroes read no split ...passed 00:05:59.790 Test: blockdev write zeroes read split ...passed 00:05:59.790 Test: blockdev write zeroes read split partial ...passed 00:05:59.790 Test: blockdev reset ...[2024-12-07 17:22:32.922333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:05:59.790 [2024-12-07 17:22:32.926246] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:05:59.790 passed 00:05:59.790 Test: blockdev write read 8 blocks ...passed 
00:05:59.790 Test: blockdev write read size > 128k ...passed 00:05:59.790 Test: blockdev write read invalid size ...passed 00:05:59.790 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:59.790 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:59.790 Test: blockdev write read max offset ...passed 00:05:59.790 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:59.790 Test: blockdev writev readv 8 blocks ...passed 00:05:59.790 Test: blockdev writev readv 30 x 1block ...passed 00:05:59.790 Test: blockdev writev readv block ...passed 00:05:59.790 Test: blockdev writev readv size > 128k ...passed 00:05:59.790 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:59.790 Test: blockdev comparev and writev ...[2024-12-07 17:22:32.946149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6e34000 len:0x1000 00:05:59.790 [2024-12-07 17:22:32.946271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:59.790 passed 00:05:59.790 Test: blockdev nvme passthru rw ...passed 00:05:59.790 Test: blockdev nvme passthru vendor specific ...[2024-12-07 17:22:32.948295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:59.790 [2024-12-07 17:22:32.948327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:59.790 passed 00:05:59.790 Test: blockdev nvme admin passthru ...passed 00:05:59.790 Test: blockdev copy ...passed 00:05:59.790 Suite: bdevio tests on: Nvme0n1 00:05:59.791 Test: blockdev write read block ...passed 00:05:59.791 Test: blockdev write zeroes read block ...passed 00:05:59.791 Test: blockdev write zeroes read no split ...passed 00:05:59.791 Test: blockdev write zeroes read split ...passed 00:05:59.791 Test: blockdev write zeroes read split partial ...passed 00:05:59.791 Test: blockdev reset ...[2024-12-07 17:22:33.018470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:05:59.791 [2024-12-07 17:22:33.023924] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:05:59.791 passed 00:05:59.791 Test: blockdev write read 8 blocks ...passed 00:05:59.791 Test: blockdev write read size > 128k ...passed 00:05:59.791 Test: blockdev write read invalid size ...passed 00:05:59.791 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:59.791 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:59.791 Test: blockdev write read max offset ...passed 00:05:59.791 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:59.791 Test: blockdev writev readv 8 blocks ...passed 00:05:59.791 Test: blockdev writev readv 30 x 1block ...passed 00:05:59.791 Test: blockdev writev readv block ...passed 00:05:59.791 Test: blockdev writev readv size > 128k ...passed 00:05:59.791 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:59.791 Test: blockdev comparev and writev ...passed 00:05:59.791 Test: blockdev nvme passthru rw ...[2024-12-07 17:22:33.043704] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 
00:05:59.791 passed 00:05:59.791 Test: blockdev nvme passthru vendor specific ...passed 00:05:59.791 Test: blockdev nvme admin passthru ...[2024-12-07 17:22:33.045371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:05:59.791 [2024-12-07 17:22:33.045414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:05:59.791 passed 00:05:59.791 Test: blockdev copy ...passed 00:05:59.791 00:05:59.791 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.791 suites 6 6 n/a 0 0 00:05:59.791 tests 138 138 138 0 0 00:05:59.791 asserts 893 893 893 0 n/a 00:05:59.791 00:05:59.791 Elapsed time = 1.283 seconds 00:05:59.791 0 00:05:59.791 17:22:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59902 00:05:59.791 17:22:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59902 ']' 00:05:59.791 17:22:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59902 00:05:59.791 17:22:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:05:59.791 17:22:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.791 17:22:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59902 00:05:59.791 17:22:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.791 17:22:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.791 17:22:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59902' 00:05:59.791 killing process with pid 59902 00:05:59.791 17:22:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59902 00:05:59.791 17:22:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59902 00:06:00.732 17:22:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:00.732 00:06:00.732 real 0m2.230s 00:06:00.732 user 0m5.692s 00:06:00.732 sys 0m0.259s 00:06:00.732 17:22:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.732 ************************************ 00:06:00.732 17:22:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:00.733 END TEST bdev_bounds 00:06:00.733 ************************************ 00:06:00.733 17:22:33 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:00.733 17:22:33 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:00.733 17:22:33 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.733 17:22:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:00.733 ************************************ 00:06:00.733 START TEST bdev_nbd 00:06:00.733 ************************************ 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59961 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59961 /var/tmp/spdk-nbd.sock 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 59961 ']' 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:00.733 17:22:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:00.733 [2024-12-07 17:22:33.899130] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:00.733 [2024-12-07 17:22:33.899242] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:00.733 [2024-12-07 17:22:34.053253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.993 [2024-12-07 17:22:34.155357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.565 17:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.565 17:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:01.565 17:22:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:01.565 17:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.565 17:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:01.565 17:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:01.565 17:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:01.565 17:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.565 17:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:01.565 17:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:01.565 17:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:01.565 17:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:01.565 17:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:01.565 17:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:01.565 17:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:01.825 17:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:01.825 17:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:01.825 17:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:01.825 17:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:01.825 17:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:01.825 17:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.825 17:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.825 17:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:01.825 17:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:01.825 17:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.825 17:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.825 17:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:01.825 1+0 records in 
00:06:01.825 1+0 records out 00:06:01.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564787 s, 7.3 MB/s 00:06:01.825 17:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.825 17:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:01.825 17:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:01.825 17:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:01.825 17:22:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:01.825 17:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:01.825 17:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:01.825 17:22:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:01.825 17:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:01.825 17:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:01.825 17:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:01.825 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:01.825 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:01.825 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:01.825 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:01.825 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:01.825 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:01.825 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:01.825 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:01.825 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:01.825 1+0 records in 00:06:01.825 1+0 records out 00:06:01.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368792 s, 11.1 MB/s 00:06:01.825 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:02.084 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:02.084 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:02.084 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.084 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:02.084 17:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:02.084 17:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:02.084 17:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:02.084 17:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:02.084 17:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:02.084 17:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:02.084 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:02.084 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:02.084 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.084 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.084 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:02.084 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:02.084 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.084 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.084 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:02.084 1+0 records in 00:06:02.084 1+0 records out 00:06:02.084 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00079435 s, 5.2 MB/s 00:06:02.084 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:02.345 1+0 records in 00:06:02.345 1+0 records out 00:06:02.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000698772 s, 5.9 MB/s 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:02.345 17:22:35 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:02.345 17:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:02.605 17:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:02.605 17:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:02.605 17:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:02.605 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:02.605 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:02.605 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.605 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.605 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:02.605 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:02.605 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.606 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.606 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:02.606 1+0 records in 00:06:02.606 1+0 records out 00:06:02.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000593347 s, 6.9 MB/s 00:06:02.606 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:02.606 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:02.606 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:02.606 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.606 17:22:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:02.606 17:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:02.606 17:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:02.606 17:22:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:02.866 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:02.866 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:02.866 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:02.866 17:22:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:02.866 17:22:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:02.866 17:22:36 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:02.866 17:22:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:02.866 17:22:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:02.866 17:22:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:02.866 17:22:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:02.866 17:22:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:02.866 17:22:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:02.866 1+0 records in 00:06:02.866 1+0 records out 00:06:02.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00068434 s, 6.0 MB/s 00:06:02.866 17:22:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:02.866 17:22:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:02.866 17:22:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:02.866 17:22:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:02.866 17:22:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:02.866 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:02.866 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:02.866 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.127 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:03.127 { 00:06:03.127 "nbd_device": "/dev/nbd0", 00:06:03.127 "bdev_name": "Nvme0n1" 00:06:03.127 }, 00:06:03.127 { 00:06:03.127 "nbd_device": "/dev/nbd1", 00:06:03.127 "bdev_name": "Nvme1n1" 00:06:03.127 }, 00:06:03.127 { 00:06:03.127 "nbd_device": "/dev/nbd2", 00:06:03.127 "bdev_name": "Nvme2n1" 00:06:03.127 }, 00:06:03.127 { 00:06:03.127 "nbd_device": "/dev/nbd3", 00:06:03.127 "bdev_name": "Nvme2n2" 00:06:03.127 }, 00:06:03.127 { 00:06:03.127 "nbd_device": "/dev/nbd4", 00:06:03.127 "bdev_name": "Nvme2n3" 00:06:03.127 }, 00:06:03.127 { 00:06:03.127 "nbd_device": "/dev/nbd5", 00:06:03.127 "bdev_name": "Nvme3n1" 00:06:03.127 } 00:06:03.127 ]' 00:06:03.127 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:03.127 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:03.127 { 00:06:03.127 "nbd_device": "/dev/nbd0", 00:06:03.127 "bdev_name": "Nvme0n1" 00:06:03.127 }, 00:06:03.127 { 00:06:03.127 "nbd_device": "/dev/nbd1", 00:06:03.127 "bdev_name": "Nvme1n1" 00:06:03.127 }, 00:06:03.127 { 00:06:03.127 "nbd_device": "/dev/nbd2", 00:06:03.127 "bdev_name": "Nvme2n1" 00:06:03.127 }, 00:06:03.127 { 00:06:03.127 "nbd_device": "/dev/nbd3", 00:06:03.127 "bdev_name": "Nvme2n2" 00:06:03.127 }, 00:06:03.127 { 00:06:03.127 "nbd_device": "/dev/nbd4", 00:06:03.127 "bdev_name": "Nvme2n3" 00:06:03.127 }, 00:06:03.127 { 00:06:03.127 "nbd_device": "/dev/nbd5", 00:06:03.127 "bdev_name": "Nvme3n1" 00:06:03.127 } 00:06:03.127 ]' 00:06:03.127 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:03.127 17:22:36 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:03.127 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.127 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:03.127 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.127 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:03.127 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.127 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.387 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.387 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.387 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.387 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.387 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.387 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.387 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:03.387 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.387 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.387 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.648 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.648 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:03.648 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.648 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.648 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.648 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.648 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:03.648 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.648 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.648 17:22:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:03.908 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:03.908 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:03.908 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:03.908 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.908 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.908 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:03.908 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:03.908 17:22:37 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:03.909 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.909 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:03.909 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:03.909 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:03.909 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:03.909 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.909 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.909 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:03.909 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:03.909 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.909 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.909 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:04.169 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:04.169 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:04.169 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:04.169 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.169 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.169 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:04.169 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:04.169 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.169 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.169 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:04.430 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:04.430 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:04.430 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:04.430 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.430 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.430 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:04.430 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:04.430 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.430 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.430 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.430 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.691 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.691 17:22:37 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.691 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.691 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.691 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.691 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.691 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:04.691 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.691 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.691 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:04.691 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:04.691 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:04.691 17:22:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:04.691 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.691 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:04.692 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.692 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:04.692 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.692 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:04.692 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.692 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:04.692 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:04.692 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:04.692 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:04.692 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:04.692 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:04.692 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:04.692 17:22:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:04.952 /dev/nbd0 00:06:04.952 17:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:04.952 17:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:04.952 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:04.952 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:04.952 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:04.952 
17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:04.952 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:04.952 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:04.952 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:04.952 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:04.952 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:04.952 1+0 records in 00:06:04.952 1+0 records out 00:06:04.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737542 s, 5.6 MB/s 00:06:04.952 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.952 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:04.952 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:04.952 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:04.952 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:04.952 17:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.952 17:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:04.952 17:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:05.213 /dev/nbd1 00:06:05.213 17:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.213 17:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.213 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:05.213 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:05.213 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.213 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.213 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:05.213 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:05.213 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.213 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.213 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:05.213 1+0 records in 00:06:05.213 1+0 records out 00:06:05.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000725645 s, 5.6 MB/s 00:06:05.213 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.213 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:05.213 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.213 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.213 17:22:38 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:06:05.213 17:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.213 17:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:05.213 17:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:05.471 /dev/nbd10 00:06:05.472 17:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:05.472 17:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:05.472 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:05.472 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:05.472 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.472 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.472 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:05.472 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:05.472 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.472 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.472 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:05.472 1+0 records in 00:06:05.472 1+0 records out 00:06:05.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000749173 s, 5.5 MB/s 00:06:05.472 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.472 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:05.472 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.472 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.472 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:05.472 17:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.472 17:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:05.472 17:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:05.472 /dev/nbd11 00:06:05.730 17:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:05.730 17:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:05.730 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:05.730 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:05.730 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.730 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.730 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:05.730 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:05.730 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.730 17:22:38 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.730 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:05.730 1+0 records in 00:06:05.730 1+0 records out 00:06:05.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474188 s, 8.6 MB/s 00:06:05.730 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.730 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:05.730 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:05.730 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.730 17:22:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:05.730 17:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.730 17:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:05.730 17:22:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:05.730 /dev/nbd12 00:06:05.730 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:05.730 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:05.730 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:05.730 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:05.730 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.730 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.730 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:05.730 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:05.730 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.730 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.730 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:05.730 1+0 records in 00:06:05.730 1+0 records out 00:06:05.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000591611 s, 6.9 MB/s 00:06:05.730 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:06.002 /dev/nbd13 
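The per-device attach sequence traced here (and continuing for nbd13 just below) always has the same shape: nbd_start_disk binds a bdev to /dev/nbdN over the RPC socket, then the script polls /proc/partitions until the node appears and proves it serves data with one O_DIRECT read. A minimal standalone sketch of that readiness check, with nbd and tmpfile as placeholder values rather than the exact variables from autotest_common.sh:

  nbd=nbd0
  tmpfile=/tmp/nbdtest
  # Poll up to 20 times for the kernel to register the block device.
  for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd" /proc/partitions && break
      sleep 0.1
  done
  # One 4 KiB direct read: bypasses the page cache, so success means the
  # SPDK bdev behind the NBD socket actually answered the I/O.
  dd if="/dev/$nbd" of="$tmpfile" bs=4096 count=1 iflag=direct
  size=$(stat -c %s "$tmpfile")
  rm -f "$tmpfile"
  [ "$size" != 0 ] && echo "$nbd is readable"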
00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:06.002 1+0 records in 00:06:06.002 1+0 records out 00:06:06.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000851905 s, 4.8 MB/s 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.002 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.260 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:06.260 { 00:06:06.260 "nbd_device": "/dev/nbd0", 00:06:06.260 "bdev_name": "Nvme0n1" 00:06:06.260 }, 00:06:06.260 { 00:06:06.260 "nbd_device": "/dev/nbd1", 00:06:06.260 "bdev_name": "Nvme1n1" 00:06:06.260 }, 00:06:06.260 { 00:06:06.260 "nbd_device": "/dev/nbd10", 00:06:06.260 "bdev_name": "Nvme2n1" 00:06:06.260 }, 00:06:06.260 { 00:06:06.260 "nbd_device": "/dev/nbd11", 00:06:06.260 "bdev_name": "Nvme2n2" 00:06:06.260 }, 00:06:06.260 { 00:06:06.260 "nbd_device": "/dev/nbd12", 00:06:06.260 "bdev_name": "Nvme2n3" 00:06:06.260 }, 00:06:06.260 { 00:06:06.260 "nbd_device": "/dev/nbd13", 00:06:06.260 "bdev_name": "Nvme3n1" 00:06:06.260 } 00:06:06.260 ]' 00:06:06.260 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.260 { 00:06:06.260 "nbd_device": "/dev/nbd0", 00:06:06.260 "bdev_name": "Nvme0n1" 00:06:06.260 }, 00:06:06.260 { 00:06:06.260 "nbd_device": "/dev/nbd1", 00:06:06.260 "bdev_name": "Nvme1n1" 00:06:06.260 }, 00:06:06.260 { 00:06:06.260 "nbd_device": "/dev/nbd10", 00:06:06.260 "bdev_name": "Nvme2n1" 
00:06:06.260 }, 00:06:06.260 { 00:06:06.260 "nbd_device": "/dev/nbd11", 00:06:06.260 "bdev_name": "Nvme2n2" 00:06:06.260 }, 00:06:06.260 { 00:06:06.260 "nbd_device": "/dev/nbd12", 00:06:06.260 "bdev_name": "Nvme2n3" 00:06:06.260 }, 00:06:06.260 { 00:06:06.260 "nbd_device": "/dev/nbd13", 00:06:06.260 "bdev_name": "Nvme3n1" 00:06:06.260 } 00:06:06.260 ]' 00:06:06.260 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.260 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.260 /dev/nbd1 00:06:06.260 /dev/nbd10 00:06:06.260 /dev/nbd11 00:06:06.260 /dev/nbd12 00:06:06.260 /dev/nbd13' 00:06:06.260 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.260 /dev/nbd1 00:06:06.260 /dev/nbd10 00:06:06.260 /dev/nbd11 00:06:06.260 /dev/nbd12 00:06:06.260 /dev/nbd13' 00:06:06.260 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.260 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:06.260 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:06.260 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:06.260 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:06.260 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:06.260 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:06.260 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.260 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.261 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:06.261 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.261 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:06.261 256+0 records in 00:06:06.261 256+0 records out 00:06:06.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00896706 s, 117 MB/s 00:06:06.261 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.261 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.517 256+0 records in 00:06:06.517 256+0 records out 00:06:06.517 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0650696 s, 16.1 MB/s 00:06:06.517 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.517 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.517 256+0 records in 00:06:06.517 256+0 records out 00:06:06.517 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0687042 s, 15.3 MB/s 00:06:06.517 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.517 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:06.517 256+0 records in 00:06:06.517 256+0 records out 
00:06:06.518 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0719495 s, 14.6 MB/s 00:06:06.518 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.518 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:06.775 256+0 records in 00:06:06.775 256+0 records out 00:06:06.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0717938 s, 14.6 MB/s 00:06:06.775 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.775 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:06.775 256+0 records in 00:06:06.775 256+0 records out 00:06:06.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.068574 s, 15.3 MB/s 00:06:06.775 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.775 17:22:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:06.775 256+0 records in 00:06:06.775 256+0 records out 00:06:06.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0684687 s, 15.3 MB/s 00:06:06.775 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:06.775 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:06.775 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.775 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.775 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:06.775 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.775 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.775 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.775 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:06.775 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.775 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:06.775 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.775 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:06.776 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.776 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:06.776 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.776 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:06.776 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.776 17:22:40 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:06.776 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:06.776 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:06.776 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.776 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:06.776 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.776 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:06.776 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.776 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.034 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.034 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.034 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.034 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.034 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.034 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.034 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:07.034 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.034 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.034 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:07.292 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:07.292 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:07.292 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:07.292 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.292 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.292 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:07.292 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:07.292 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.292 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.292 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:07.692 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:07.692 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:07.692 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:07.692 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.692 
17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.692 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:07.692 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:07.692 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.692 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.692 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:07.692 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:07.693 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:07.693 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:07.693 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.693 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.693 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:07.693 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:07.693 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.693 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.693 17:22:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:07.950 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:07.950 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:07.950 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:07.950 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.950 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.950 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:07.950 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:07.950 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.950 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.950 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:07.950 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:07.950 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:07.950 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:07.950 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.950 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.950 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:07.950 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:07.950 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.950 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.950 17:22:41 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.207 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.207 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:08.207 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:08.207 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.207 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:08.207 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:08.207 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.207 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:08.207 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:08.207 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:08.207 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:08.207 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:08.207 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:08.207 17:22:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:08.207 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.207 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:08.207 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:08.463 malloc_lvol_verify 00:06:08.463 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:08.721 c8b58388-a8d6-4290-883b-ca8ffb7a16de 00:06:08.721 17:22:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:08.979 72d89f1f-c856-4396-ad30-12ecc3857bd6 00:06:08.979 17:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:09.238 /dev/nbd0 00:06:09.238 17:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:09.238 17:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:09.238 17:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:09.238 17:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:09.238 17:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:09.238 mke2fs 1.47.0 (5-Feb-2023) 00:06:09.238 Discarding device blocks: 0/4096 done 00:06:09.238 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:09.238 00:06:09.238 Allocating group tables: 0/1 done 00:06:09.238 Writing inode tables: 0/1 done 00:06:09.238 Creating journal (1024 blocks): done 00:06:09.238 Writing superblocks and filesystem accounting information: 0/1 done 00:06:09.238 00:06:09.238 17:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
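The nbd_with_lvol_verify step above stacks the whole chain — malloc bdev, lvolstore, lvol, NBD export — and uses mkfs.ext4 as the capacity proof: the format only succeeds once /sys/block/nbd0/size reports the real 4 MiB volume (the 8192 sectors seen in the trace). A condensed sketch of that flow; socket path, names, and sizes are taken from the trace, and the rpc.py subcommands are the standard SPDK RPCs:

  sock=/var/tmp/spdk-nbd.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB, 512 B blocks
  $rpc -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
  $rpc -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
  $rpc -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
  # The kernel reports capacity in 512 B sectors; 0 means not ready yet.
  while (( $(cat /sys/block/nbd0/size) == 0 )); do sleep 0.1; done
  mkfs.ext4 /dev/nbd0                 # fails if the exported size is bogus
  $rpc -s "$sock" nbd_stop_disk /dev/nbd0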
00:06:09.238 17:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.238 17:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:09.238 17:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.238 17:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:09.238 17:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.238 17:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.499 17:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.499 17:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.499 17:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.499 17:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.499 17:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.499 17:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.499 17:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:09.499 17:22:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.499 17:22:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59961 00:06:09.499 17:22:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 59961 ']' 00:06:09.499 17:22:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 59961 00:06:09.499 17:22:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:09.499 17:22:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.499 17:22:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59961 00:06:09.499 17:22:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.499 17:22:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.499 killing process with pid 59961 00:06:09.499 17:22:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59961' 00:06:09.499 17:22:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 59961 00:06:09.499 17:22:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 59961 00:06:10.440 17:22:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:10.440 00:06:10.440 real 0m9.649s 00:06:10.440 user 0m13.813s 00:06:10.440 sys 0m3.059s 00:06:10.440 17:22:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.440 17:22:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:10.440 ************************************ 00:06:10.440 END TEST bdev_nbd 00:06:10.440 ************************************ 00:06:10.440 17:22:43 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:10.440 17:22:43 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:10.440 skipping fio tests on NVMe due to multi-ns failures. 00:06:10.440 17:22:43 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
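The teardown above is the stock killprocess helper from autotest_common.sh: confirm the app is still alive with kill -0, inspect its command name (an SPDK app reports as reactor_0, which decides whether plain kill or the sudo path is taken), then signal it and wait so the RPC socket is released. Roughly:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1              # still running?
      if [ "$(uname)" = Linux ]; then
          ps --no-headers -o comm= "$pid"     # reactor_0 for an SPDK app
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                             # reap the child, freeing the socket
  }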
00:06:10.440 17:22:43 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:10.440 17:22:43 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:10.440 17:22:43 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:10.440 17:22:43 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.440 17:22:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:10.440 ************************************ 00:06:10.440 START TEST bdev_verify 00:06:10.440 ************************************ 00:06:10.440 17:22:43 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:10.440 [2024-12-07 17:22:43.588202] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:10.440 [2024-12-07 17:22:43.588322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60334 ] 00:06:10.440 [2024-12-07 17:22:43.748730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.701 [2024-12-07 17:22:43.849901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.701 [2024-12-07 17:22:43.849926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.271 Running I/O for 5 seconds... 00:06:13.590 23040.00 IOPS, 90.00 MiB/s [2024-12-07T17:22:47.539Z] 21536.00 IOPS, 84.12 MiB/s [2024-12-07T17:22:48.922Z] 21632.00 IOPS, 84.50 MiB/s [2024-12-07T17:22:49.864Z] 22896.00 IOPS, 89.44 MiB/s [2024-12-07T17:22:49.864Z] 23257.60 IOPS, 90.85 MiB/s 00:06:16.482 Latency(us) 00:06:16.482 [2024-12-07T17:22:49.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:16.482 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:16.482 Verification LBA range: start 0x0 length 0xbd0bd 00:06:16.482 Nvme0n1 : 5.08 1916.44 7.49 0.00 0.00 66550.84 11947.72 78643.20 00:06:16.482 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:16.482 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:16.482 Nvme0n1 : 5.05 1924.96 7.52 0.00 0.00 66328.96 14115.45 76223.41 00:06:16.482 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:16.482 Verification LBA range: start 0x0 length 0xa0000 00:06:16.482 Nvme1n1 : 5.08 1915.91 7.48 0.00 0.00 66417.15 15728.64 65334.35 00:06:16.482 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:16.482 Verification LBA range: start 0xa0000 length 0xa0000 00:06:16.482 Nvme1n1 : 5.06 1923.79 7.51 0.00 0.00 66264.50 16131.94 68560.74 00:06:16.482 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:16.482 Verification LBA range: start 0x0 length 0x80000 00:06:16.482 Nvme2n1 : 5.08 1915.38 7.48 0.00 0.00 66262.55 16535.24 61301.37 00:06:16.482 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:16.482 Verification LBA range: start 0x80000 length 0x80000 00:06:16.482 Nvme2n1 : 5.06 1923.18 7.51 0.00 0.00 66038.78 17644.31 67754.14 00:06:16.482 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:16.482 Verification LBA range: start 0x0 length 0x80000 00:06:16.482 Nvme2n2 : 5.08 1914.23 7.48 0.00 0.00 66131.69 17341.83 59284.87 00:06:16.482 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:16.482 Verification LBA range: start 0x80000 length 0x80000 00:06:16.482 Nvme2n2 : 5.06 1922.68 7.51 0.00 0.00 65900.60 16837.71 66140.95 00:06:16.482 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:16.482 Verification LBA range: start 0x0 length 0x80000 00:06:16.482 Nvme2n3 : 5.08 1913.72 7.48 0.00 0.00 66021.73 13510.50 62914.56 00:06:16.482 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:16.482 Verification LBA range: start 0x80000 length 0x80000 00:06:16.482 Nvme2n3 : 5.07 1931.27 7.54 0.00 0.00 65509.98 5368.91 69770.63 00:06:16.482 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:16.482 Verification LBA range: start 0x0 length 0x20000 00:06:16.482 Nvme3n1 : 5.09 1922.71 7.51 0.00 0.00 65638.53 4007.78 66947.54 00:06:16.482 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:16.482 Verification LBA range: start 0x20000 length 0x20000 00:06:16.482 Nvme3n1 : 5.08 1940.07 7.58 0.00 0.00 65157.47 7561.85 72190.42 00:06:16.482 [2024-12-07T17:22:49.864Z] =================================================================================================================== 00:06:16.482 [2024-12-07T17:22:49.864Z] Total : 23064.35 90.10 0.00 0.00 66017.14 4007.78 78643.20 00:06:17.858 00:06:17.858 real 0m7.320s 00:06:17.858 user 0m13.715s 00:06:17.858 sys 0m0.219s 00:06:17.858 17:22:50 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.858 17:22:50 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:17.858 ************************************ 00:06:17.858 END TEST bdev_verify 00:06:17.858 ************************************ 00:06:17.858 17:22:50 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:17.858 17:22:50 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:17.858 17:22:50 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.858 17:22:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:17.858 ************************************ 00:06:17.858 START TEST bdev_verify_big_io 00:06:17.858 ************************************ 00:06:17.858 17:22:50 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:17.858 [2024-12-07 17:22:50.956914] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
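bdev_verify above and bdev_verify_big_io starting here exercise the same bdevperf binary with essentially only the I/O size changed (-o 4096 vs -o 65536). Every flag below is the one recorded in the trace: 128-deep queues, verify workload, 5-second run, and -m 0x3 together with -C so that both cores 0 and 1 submit to every bdev — which is, as the per-core masks 0x1 and 0x2 in the results suggest, why each namespace reports two job lines. The invocation shape, with the generated bdev.json as the device config:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3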
00:06:17.858 [2024-12-07 17:22:50.957043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60432 ] 00:06:17.858 [2024-12-07 17:22:51.109492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.858 [2024-12-07 17:22:51.207952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.858 [2024-12-07 17:22:51.207973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.793 Running I/O for 5 seconds... 00:06:24.059 1840.00 IOPS, 115.00 MiB/s [2024-12-07T17:22:57.699Z] 2286.50 IOPS, 142.91 MiB/s [2024-12-07T17:22:58.635Z] 2816.33 IOPS, 176.02 MiB/s 00:06:25.253 Latency(us) 00:06:25.253 [2024-12-07T17:22:58.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:25.253 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:25.253 Verification LBA range: start 0x0 length 0xbd0b 00:06:25.253 Nvme0n1 : 5.73 89.32 5.58 0.00 0.00 1364181.07 8822.15 1632552.17 00:06:25.253 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:25.253 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:25.253 Nvme0n1 : 5.50 163.00 10.19 0.00 0.00 762301.89 11695.66 777559.43 00:06:25.253 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:25.253 Verification LBA range: start 0x0 length 0xa000 00:06:25.253 Nvme1n1 : 5.79 92.87 5.80 0.00 0.00 1248425.01 23693.78 2142321.43 00:06:25.253 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:25.253 Verification LBA range: start 0xa000 length 0xa000 00:06:25.253 Nvme1n1 : 5.62 159.44 9.96 0.00 0.00 743766.76 70173.93 706578.90 00:06:25.253 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:25.253 Verification LBA range: start 0x0 length 0x8000 00:06:25.253 Nvme2n1 : 5.79 96.64 6.04 0.00 0.00 1140248.98 35490.26 2168132.53 00:06:25.253 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:25.253 Verification LBA range: start 0x8000 length 0x8000 00:06:25.253 Nvme2n1 : 5.65 169.98 10.62 0.00 0.00 701391.32 25306.98 680767.80 00:06:25.253 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:25.253 Verification LBA range: start 0x0 length 0x8000 00:06:25.253 Nvme2n2 : 5.95 136.77 8.55 0.00 0.00 772001.54 10284.11 1568024.42 00:06:25.253 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:25.253 Verification LBA range: start 0x8000 length 0x8000 00:06:25.253 Nvme2n2 : 5.68 176.90 11.06 0.00 0.00 662818.91 26819.35 696899.74 00:06:25.253 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:25.253 Verification LBA range: start 0x0 length 0x8000 00:06:25.253 Nvme2n3 : 6.23 213.66 13.35 0.00 0.00 470801.36 11342.77 2271376.94 00:06:25.253 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:25.253 Verification LBA range: start 0x8000 length 0x8000 00:06:25.253 Nvme2n3 : 5.68 176.16 11.01 0.00 0.00 648715.85 27021.00 780785.82 00:06:25.253 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:25.253 Verification LBA range: start 0x0 length 0x2000 00:06:25.253 Nvme3n1 : 6.47 360.63 22.54 0.00 0.00 269336.34 104.76 2297188.04 00:06:25.253 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 
128, IO size: 65536) 00:06:25.253 Verification LBA range: start 0x2000 length 0x2000 00:06:25.253 Nvme3n1 : 5.69 183.97 11.50 0.00 0.00 607381.12 1335.93 787238.60 00:06:25.253 [2024-12-07T17:22:58.635Z] =================================================================================================================== 00:06:25.253 [2024-12-07T17:22:58.635Z] Total : 2019.33 126.21 0.00 0.00 661407.76 104.76 2297188.04 00:06:27.162 00:06:27.162 real 0m9.221s 00:06:27.162 user 0m17.519s 00:06:27.162 sys 0m0.232s 00:06:27.162 17:23:00 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.162 ************************************ 00:06:27.162 END TEST bdev_verify_big_io 00:06:27.162 ************************************ 00:06:27.162 17:23:00 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:27.162 17:23:00 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:27.162 17:23:00 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:27.162 17:23:00 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.162 17:23:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:27.162 ************************************ 00:06:27.163 START TEST bdev_write_zeroes 00:06:27.163 ************************************ 00:06:27.163 17:23:00 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:27.163 [2024-12-07 17:23:00.252117] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:27.163 [2024-12-07 17:23:00.252239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60541 ] 00:06:27.163 [2024-12-07 17:23:00.409356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.163 [2024-12-07 17:23:00.513130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.746 Running I/O for 1 seconds... 
00:06:29.121 66816.00 IOPS, 261.00 MiB/s 00:06:29.121 Latency(us) 00:06:29.121 [2024-12-07T17:23:02.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:29.121 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:29.121 Nvme0n1 : 1.02 11082.54 43.29 0.00 0.00 11523.57 8570.09 22383.06 00:06:29.121 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:29.121 Nvme1n1 : 1.02 11069.96 43.24 0.00 0.00 11526.14 8620.50 22080.59 00:06:29.121 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:29.121 Nvme2n1 : 1.02 11057.23 43.19 0.00 0.00 11508.11 8620.50 21778.12 00:06:29.121 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:29.121 Nvme2n2 : 1.03 11044.68 43.14 0.00 0.00 11484.24 8570.09 22282.24 00:06:29.121 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:29.121 Nvme2n3 : 1.03 11032.25 43.09 0.00 0.00 11461.89 8519.68 22080.59 00:06:29.121 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:29.121 Nvme3n1 : 1.03 11019.61 43.05 0.00 0.00 11441.12 6704.84 22080.59 00:06:29.121 [2024-12-07T17:23:02.503Z] =================================================================================================================== 00:06:29.121 [2024-12-07T17:23:02.503Z] Total : 66306.27 259.01 0.00 0.00 11490.84 6704.84 22383.06 00:06:30.062 00:06:30.062 real 0m2.898s 00:06:30.062 user 0m2.585s 00:06:30.062 sys 0m0.197s 00:06:30.062 17:23:03 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.062 17:23:03 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:30.062 ************************************ 00:06:30.062 END TEST bdev_write_zeroes 00:06:30.062 ************************************ 00:06:30.062 17:23:03 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:30.062 17:23:03 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:30.062 17:23:03 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.062 17:23:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:30.062 ************************************ 00:06:30.062 START TEST bdev_json_nonenclosed 00:06:30.062 ************************************ 00:06:30.062 17:23:03 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:30.062 [2024-12-07 17:23:03.217352] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
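bdev_json_nonenclosed launched here, and bdev_json_nonarray after it, are negative tests: each feeds bdevperf a config broken in one targeted way and expects a clean spdk_app_stop error path rather than a crash. Judging only from the error strings in the output that follows, the shapes involved are roughly the following (illustrative, not the literal fixture contents):

  { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }   # well-formed config
  "subsystems": []      # nonenclosed.json -> "not enclosed in {}."
  { "subsystems": {} }  # nonarray.json    -> "'subsystems' should be an array."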
00:06:30.062 [2024-12-07 17:23:03.217475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60596 ] 00:06:30.062 [2024-12-07 17:23:03.375785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.322 [2024-12-07 17:23:03.478586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.322 [2024-12-07 17:23:03.478656] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:30.322 [2024-12-07 17:23:03.478673] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:30.322 [2024-12-07 17:23:03.478682] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:30.322 00:06:30.322 real 0m0.507s 00:06:30.322 user 0m0.313s 00:06:30.322 sys 0m0.089s 00:06:30.322 17:23:03 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.322 ************************************ 00:06:30.322 17:23:03 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:30.322 END TEST bdev_json_nonenclosed 00:06:30.322 ************************************ 00:06:30.579 17:23:03 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:30.579 17:23:03 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:30.579 17:23:03 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.579 17:23:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:30.579 ************************************ 00:06:30.579 START TEST bdev_json_nonarray 00:06:30.579 ************************************ 00:06:30.579 17:23:03 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:30.579 [2024-12-07 17:23:03.773898] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:30.579 [2024-12-07 17:23:03.774022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60627 ] 00:06:30.579 [2024-12-07 17:23:03.932762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.836 [2024-12-07 17:23:04.032055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.836 [2024-12-07 17:23:04.032138] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
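The bdev_json_nonenclosed and bdev_json_nonarray checks above feed bdevperf deliberately malformed configs and expect json_config_prepare_ctx to reject them. The fixture contents are not reproduced in the log; the shapes below are hypothetical, inferred only from the two logged errors (a top level not enclosed in {}, and a 'subsystems' key that is not an array):

# Hypothetical stand-ins for the two fixtures (the real files live under
# /home/vagrant/spdk_repo/spdk/test/bdev/); each should trigger the
# corresponding *ERROR* line seen above.
cat > nonenclosed.json <<'EOF'
"subsystems": []
EOF
cat > nonarray.json <<'EOF'
{ "subsystems": {} }
EOF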
00:06:30.836 [2024-12-07 17:23:04.032156] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:30.836 [2024-12-07 17:23:04.032165] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:30.836 00:06:30.836 real 0m0.500s 00:06:30.836 user 0m0.308s 00:06:30.836 sys 0m0.088s 00:06:30.836 17:23:04 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.836 17:23:04 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:30.836 ************************************ 00:06:30.836 END TEST bdev_json_nonarray 00:06:30.836 ************************************ 00:06:31.120 17:23:04 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:06:31.120 17:23:04 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:06:31.120 17:23:04 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:06:31.120 17:23:04 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:31.120 17:23:04 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:06:31.120 17:23:04 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:31.120 17:23:04 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:31.120 17:23:04 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:31.120 17:23:04 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:31.120 17:23:04 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:31.120 17:23:04 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:31.120 00:06:31.120 real 0m37.403s 00:06:31.120 user 0m58.369s 00:06:31.120 sys 0m5.028s 00:06:31.120 17:23:04 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.120 17:23:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.120 ************************************ 00:06:31.120 END TEST blockdev_nvme 00:06:31.120 ************************************ 00:06:31.120 17:23:04 -- spdk/autotest.sh@209 -- # uname -s 00:06:31.120 17:23:04 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:31.120 17:23:04 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:31.120 17:23:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:31.120 17:23:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.120 17:23:04 -- common/autotest_common.sh@10 -- # set +x 00:06:31.120 ************************************ 00:06:31.120 START TEST blockdev_nvme_gpt 00:06:31.120 ************************************ 00:06:31.120 17:23:04 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:31.120 * Looking for test storage... 
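Every START TEST/END TEST banner pair in this log is printed by the run_test helper from autotest_common.sh, which is why a set +x and an xtrace_disable sit next to each banner. A simplified sketch of that idiom (the real helper also records per-test timing):

# Sketch of the run_test bracketing seen throughout this log (simplified;
# assumes the wrapped command is passed as: run_test <name> <cmd> <args...>).
run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    "$@"
    local rc=$?    # capture the wrapped command's exit status
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}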
00:06:31.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:31.121 17:23:04 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:31.121 17:23:04 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:06:31.121 17:23:04 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.121 17:23:04 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.121 17:23:04 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:31.121 17:23:04 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.121 17:23:04 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.121 --rc genhtml_branch_coverage=1 00:06:31.121 --rc genhtml_function_coverage=1 00:06:31.121 --rc genhtml_legend=1 00:06:31.121 --rc geninfo_all_blocks=1 00:06:31.121 --rc geninfo_unexecuted_blocks=1 00:06:31.121 00:06:31.121 ' 00:06:31.121 17:23:04 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:31.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.121 --rc 
genhtml_branch_coverage=1 00:06:31.121 --rc genhtml_function_coverage=1 00:06:31.121 --rc genhtml_legend=1 00:06:31.121 --rc geninfo_all_blocks=1 00:06:31.121 --rc geninfo_unexecuted_blocks=1 00:06:31.121 00:06:31.121 ' 00:06:31.121 17:23:04 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:31.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.121 --rc genhtml_branch_coverage=1 00:06:31.121 --rc genhtml_function_coverage=1 00:06:31.121 --rc genhtml_legend=1 00:06:31.121 --rc geninfo_all_blocks=1 00:06:31.121 --rc geninfo_unexecuted_blocks=1 00:06:31.121 00:06:31.121 ' 00:06:31.121 17:23:04 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.121 --rc genhtml_branch_coverage=1 00:06:31.121 --rc genhtml_function_coverage=1 00:06:31.121 --rc genhtml_legend=1 00:06:31.121 --rc geninfo_all_blocks=1 00:06:31.121 --rc geninfo_unexecuted_blocks=1 00:06:31.121 00:06:31.121 ' 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:06:31.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
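The scripts/common.sh trace above is a component-wise version comparison: lcov's reported version is split into fields, each field is compared numerically against the threshold 2, and the outcome selects which LCOV_OPTS to export. A self-contained sketch of the same idiom, simplified to plain dot/dash-separated numeric versions:

# Sketch of the lt (less-than) version compare traced above; versions with
# rc/pre suffixes need the extra handling that scripts/common.sh carries.
lt() {
    local -a ver1 ver2
    IFS=.- read -r -a ver1 <<< "$1"
    IFS=.- read -r -a ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first arg is newer
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first arg is older
    done
    return 1   # equal is not less-than
}
lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # the branch taken in the trace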
00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60700 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60700 00:06:31.121 17:23:04 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60700 ']' 00:06:31.121 17:23:04 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.121 17:23:04 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.121 17:23:04 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.121 17:23:04 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.121 17:23:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:31.121 17:23:04 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:31.392 [2024-12-07 17:23:04.521897] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:31.392 [2024-12-07 17:23:04.522023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60700 ] 00:06:31.392 [2024-12-07 17:23:04.680920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.650 [2024-12-07 17:23:04.778720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.220 17:23:05 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.220 17:23:05 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:32.220 17:23:05 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:32.220 17:23:05 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:06:32.220 17:23:05 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:32.480 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:32.480 Waiting for block devices as requested 00:06:32.480 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:32.739 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:32.739 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:32.740 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:38.027 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:38.027 17:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:38.027 17:23:11 blockdev_nvme_gpt -- 
common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:38.027 17:23:11 blockdev_nvme_gpt -- 
common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:38.027 17:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:38.027 17:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:38.027 17:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:38.027 17:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:38.027 17:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:38.027 17:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:38.027 17:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:38.027 17:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:38.027 17:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:38.027 BYT; 00:06:38.027 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:38.027 17:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:38.027 BYT; 00:06:38.027 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:38.027 17:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:38.027 17:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:38.027 17:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:38.027 17:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:38.027 17:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:38.027 17:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:38.027 17:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:38.027 17:23:11 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:38.027 17:23:11 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:38.027 17:23:11 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:38.027 17:23:11 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:38.027 17:23:11 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:38.027 17:23:11 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:38.027 17:23:11 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:38.027 17:23:11 
blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:38.027 17:23:11 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:38.027 17:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:38.027 17:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:38.027 17:23:11 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:38.027 17:23:11 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:38.027 17:23:11 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:38.027 17:23:11 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:38.027 17:23:11 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:38.028 17:23:11 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:38.028 17:23:11 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:38.028 17:23:11 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:38.028 17:23:11 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:38.028 17:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:38.028 17:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:38.962 The operation has completed successfully. 00:06:38.962 17:23:12 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:40.335 The operation has completed successfully. 
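The trace above harvests SPDK's GPT partition-type GUIDs straight out of the C header and hands them to sgdisk. A condensed sketch of that extraction; the comma/space substitution is an assumption reconstructed from the before/after values in the xtrace (0x6527994e-0x2c5a-... becoming 6527994e-2c5a-...):

# Sketch: pull the GUID from the SPDK_GPT_PART_TYPE_GUID macro and normalize it.
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
spdk_guid=${spdk_guid//, /-}   # assumed: "0x6527994e, 0x2c5a, ..." -> "0x6527994e-0x2c5a-..."
spdk_guid=${spdk_guid//0x/}    # -> 6527994e-2c5a-4eec-9613-8f5944074e8b
# Applied to partition 1 exactly as logged:
sgdisk -t 1:"$spdk_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1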
00:06:40.335 17:23:13 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:40.335 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:40.900 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:40.900 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:40.900 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:40.900 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:40.900 17:23:14 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:40.900 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.900 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:40.900 [] 00:06:40.900 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.900 17:23:14 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:40.900 17:23:14 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:40.900 17:23:14 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:40.900 17:23:14 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:41.158 17:23:14 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:41.158 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.158 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:41.420 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.420 17:23:14 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:41.420 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.420 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:41.420 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.420 17:23:14 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:06:41.420 17:23:14 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:41.420 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.420 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:41.420 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.420 17:23:14 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:41.420 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.420 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:41.420 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.420 17:23:14 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:41.420 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.420 17:23:14 blockdev_nvme_gpt -- 
common/autotest_common.sh@10 -- # set +x 00:06:41.420 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.420 17:23:14 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:41.420 17:23:14 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:41.420 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.420 17:23:14 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:41.420 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:41.420 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.420 17:23:14 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:41.420 17:23:14 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:41.421 17:23:14 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "cbf7a9e0-bb57-4aa3-8dc2-ecbe18500f8a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "cbf7a9e0-bb57-4aa3-8dc2-ecbe18500f8a",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": 
"6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "85898938-7c15-4c3b-8974-0b8ea9ba54ef"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "85898938-7c15-4c3b-8974-0b8ea9ba54ef",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "1f7b250c-e82c-4389-ab3a-646de5d05a5a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1f7b250c-e82c-4389-ab3a-646de5d05a5a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' 
"zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "a056c5ca-3042-4d26-98e4-d012dd35abd3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a056c5ca-3042-4d26-98e4-d012dd35abd3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "234ed127-649d-4b9b-853d-4b35d48452ee"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "234ed127-649d-4b9b-853d-4b35d48452ee",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' 
"subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:41.421 17:23:14 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:41.421 17:23:14 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:41.421 17:23:14 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:41.421 17:23:14 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 60700 00:06:41.421 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60700 ']' 00:06:41.421 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60700 00:06:41.421 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:41.421 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.421 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60700 00:06:41.421 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.421 killing process with pid 60700 00:06:41.421 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.421 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60700' 00:06:41.421 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60700 00:06:41.421 17:23:14 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60700 00:06:43.364 17:23:16 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:43.364 17:23:16 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:43.364 17:23:16 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:43.364 17:23:16 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.364 17:23:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:43.364 ************************************ 00:06:43.364 START TEST bdev_hello_world 00:06:43.364 ************************************ 00:06:43.364 17:23:16 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:43.364 [2024-12-07 17:23:16.347481] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:43.364 [2024-12-07 17:23:16.347740] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61330 ] 00:06:43.364 [2024-12-07 17:23:16.508133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.364 [2024-12-07 17:23:16.603457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.936 [2024-12-07 17:23:17.146773] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:43.936 [2024-12-07 17:23:17.146816] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:43.936 [2024-12-07 17:23:17.146838] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:43.936 [2024-12-07 17:23:17.149236] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:43.936 [2024-12-07 17:23:17.149965] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:43.936 [2024-12-07 17:23:17.150012] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:43.936 [2024-12-07 17:23:17.150527] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:43.936 00:06:43.936 [2024-12-07 17:23:17.150558] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:44.505 00:06:44.505 real 0m1.580s 00:06:44.505 user 0m1.305s 00:06:44.505 sys 0m0.167s 00:06:44.505 17:23:17 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.505 ************************************ 00:06:44.505 END TEST bdev_hello_world 00:06:44.505 ************************************ 00:06:44.505 17:23:17 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:44.765 17:23:17 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:44.765 17:23:17 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:44.765 17:23:17 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.765 17:23:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:44.765 ************************************ 00:06:44.765 START TEST bdev_bounds 00:06:44.765 ************************************ 00:06:44.765 17:23:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:44.765 17:23:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61367 00:06:44.766 Process bdevio pid: 61367 00:06:44.766 17:23:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:44.766 17:23:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61367' 00:06:44.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:44.766 17:23:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61367 00:06:44.766 17:23:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61367 ']' 00:06:44.766 17:23:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.766 17:23:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.766 17:23:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.766 17:23:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.766 17:23:17 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:44.766 17:23:17 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:44.766 [2024-12-07 17:23:17.990854] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:44.766 [2024-12-07 17:23:17.990970] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61367 ] 00:06:45.026 [2024-12-07 17:23:18.150127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.027 [2024-12-07 17:23:18.249821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.027 [2024-12-07 17:23:18.250479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.027 [2024-12-07 17:23:18.250534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.598 17:23:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.598 17:23:18 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:45.598 17:23:18 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:45.598 I/O targets: 00:06:45.598 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:45.598 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:45.598 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:45.598 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:45.598 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:45.598 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:45.598 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:45.598 00:06:45.598 00:06:45.598 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.598 http://cunit.sourceforge.net/ 00:06:45.598 00:06:45.598 00:06:45.598 Suite: bdevio tests on: Nvme3n1 00:06:45.598 Test: blockdev write read block ...passed 00:06:45.598 Test: blockdev write zeroes read block ...passed 00:06:45.598 Test: blockdev write zeroes read no split ...passed 00:06:45.598 Test: blockdev write zeroes read split ...passed 00:06:45.598 Test: blockdev write zeroes read split partial ...passed 00:06:45.598 Test: blockdev reset ...[2024-12-07 17:23:18.974925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:45.859 [2024-12-07 17:23:18.979589] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:45.859 passed 00:06:45.859 Test: blockdev write read 8 blocks ...passed 00:06:45.859 Test: blockdev write read size > 128k ...passed 00:06:45.859 Test: blockdev write read invalid size ...passed 00:06:45.859 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:45.859 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:45.859 Test: blockdev write read max offset ...passed 00:06:45.859 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:45.859 Test: blockdev writev readv 8 blocks ...passed 00:06:45.859 Test: blockdev writev readv 30 x 1block ...passed 00:06:45.859 Test: blockdev writev readv block ...passed 00:06:45.859 Test: blockdev writev readv size > 128k ...passed 00:06:45.859 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:45.860 Test: blockdev comparev and writev ...[2024-12-07 17:23:18.997503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b2004000 len:0x1000 00:06:45.860 [2024-12-07 17:23:18.997549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:45.860 passed 00:06:45.860 Test: blockdev nvme passthru rw ...passed 00:06:45.860 Test: blockdev nvme passthru vendor specific ...passed 00:06:45.860 Test: blockdev nvme admin passthru ...[2024-12-07 17:23:18.999140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:45.860 [2024-12-07 17:23:18.999171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:45.860 passed 00:06:45.860 Test: blockdev copy ...passed 00:06:45.860 Suite: bdevio tests on: Nvme2n3 00:06:45.860 Test: blockdev write read block ...passed 00:06:45.860 Test: blockdev write zeroes read block ...passed 00:06:45.860 Test: blockdev write zeroes read no split ...passed 00:06:45.860 Test: blockdev write zeroes read split ...passed 00:06:45.860 Test: blockdev write zeroes read split partial ...passed 00:06:45.860 Test: blockdev reset ...[2024-12-07 17:23:19.057907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:45.860 [2024-12-07 17:23:19.062451] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:45.860 passed 00:06:45.860 Test: blockdev write read 8 blocks ...passed 00:06:45.860 Test: blockdev write read size > 128k ...passed 00:06:45.860 Test: blockdev write read invalid size ...passed 00:06:45.860 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:45.860 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:45.860 Test: blockdev write read max offset ...passed 00:06:45.860 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:45.860 Test: blockdev writev readv 8 blocks ...passed 00:06:45.860 Test: blockdev writev readv 30 x 1block ...passed 00:06:45.860 Test: blockdev writev readv block ...passed 00:06:45.860 Test: blockdev writev readv size > 128k ...passed 00:06:45.860 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:45.860 Test: blockdev comparev and writev ...[2024-12-07 17:23:19.077865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b2002000 len:0x1000 00:06:45.860 [2024-12-07 17:23:19.077903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:45.860 passed 00:06:45.860 Test: blockdev nvme passthru rw ...passed 00:06:45.860 Test: blockdev nvme passthru vendor specific ...passed 00:06:45.860 Test: blockdev nvme admin passthru ...[2024-12-07 17:23:19.079762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:45.860 [2024-12-07 17:23:19.079793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:45.860 passed 00:06:45.860 Test: blockdev copy ...passed 00:06:45.860 Suite: bdevio tests on: Nvme2n2 00:06:45.860 Test: blockdev write read block ...passed 00:06:45.860 Test: blockdev write zeroes read block ...passed 00:06:45.860 Test: blockdev write zeroes read no split ...passed 00:06:45.860 Test: blockdev write zeroes read split ...passed 00:06:45.860 Test: blockdev write zeroes read split partial ...passed 00:06:45.860 Test: blockdev reset ...[2024-12-07 17:23:19.140232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:45.860 [2024-12-07 17:23:19.144734] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:45.860 passed 00:06:45.860 Test: blockdev write read 8 blocks ...passed 00:06:45.860 Test: blockdev write read size > 128k ...passed 00:06:45.860 Test: blockdev write read invalid size ...passed 00:06:45.860 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:45.860 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:45.860 Test: blockdev write read max offset ...passed 00:06:45.860 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:45.860 Test: blockdev writev readv 8 blocks ...passed 00:06:45.860 Test: blockdev writev readv 30 x 1block ...passed 00:06:45.860 Test: blockdev writev readv block ...passed 00:06:45.860 Test: blockdev writev readv size > 128k ...passed 00:06:45.860 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:45.860 Test: blockdev comparev and writev ...[2024-12-07 17:23:19.159842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d8c38000 len:0x1000 00:06:45.860 [2024-12-07 17:23:19.159967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:45.860 passed 00:06:45.860 Test: blockdev nvme passthru rw ...passed 00:06:45.860 Test: blockdev nvme passthru vendor specific ...passed 00:06:45.860 Test: blockdev nvme admin passthru ...[2024-12-07 17:23:19.162354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:45.860 [2024-12-07 17:23:19.162465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:45.860 passed 00:06:45.860 Test: blockdev copy ...passed 00:06:45.860 Suite: bdevio tests on: Nvme2n1 00:06:45.860 Test: blockdev write read block ...passed 00:06:45.860 Test: blockdev write zeroes read block ...passed 00:06:45.860 Test: blockdev write zeroes read no split ...passed 00:06:45.860 Test: blockdev write zeroes read split ...passed 00:06:45.860 Test: blockdev write zeroes read split partial ...passed 00:06:45.860 Test: blockdev reset ...[2024-12-07 17:23:19.221632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:45.860 [2024-12-07 17:23:19.225152] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:45.860 passed 00:06:45.860 Test: blockdev write read 8 blocks ...passed 00:06:45.860 Test: blockdev write read size > 128k ...passed 00:06:45.860 Test: blockdev write read invalid size ...passed 00:06:45.860 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:45.860 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:45.860 Test: blockdev write read max offset ...passed 00:06:45.860 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:45.860 Test: blockdev writev readv 8 blocks ...passed 00:06:45.860 Test: blockdev writev readv 30 x 1block ...passed 00:06:45.860 Test: blockdev writev readv block ...passed 00:06:46.121 Test: blockdev writev readv size > 128k ...passed 00:06:46.121 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:46.121 Test: blockdev comparev and writev ...[2024-12-07 17:23:19.245526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d8c34000 len:0x1000 00:06:46.121 [2024-12-07 17:23:19.245620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:46.121 passed 00:06:46.121 Test: blockdev nvme passthru rw ...passed 00:06:46.121 Test: blockdev nvme passthru vendor specific ...[2024-12-07 17:23:19.247767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:46.121 [2024-12-07 17:23:19.247943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:46.121 passed 00:06:46.121 Test: blockdev nvme admin passthru ...passed 00:06:46.121 Test: blockdev copy ...passed 00:06:46.121 Suite: bdevio tests on: Nvme1n1p2 00:06:46.121 Test: blockdev write read block ...passed 00:06:46.121 Test: blockdev write zeroes read block ...passed 00:06:46.121 Test: blockdev write zeroes read no split ...passed 00:06:46.121 Test: blockdev write zeroes read split ...passed 00:06:46.121 Test: blockdev write zeroes read split partial ...passed 00:06:46.121 Test: blockdev reset ...[2024-12-07 17:23:19.306489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:46.121 [2024-12-07 17:23:19.309747] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:06:46.121 passed 00:06:46.121 Test: blockdev write read 8 blocks ...passed 00:06:46.121 Test: blockdev write read size > 128k ...passed 00:06:46.121 Test: blockdev write read invalid size ...passed 00:06:46.121 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:46.121 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:46.121 Test: blockdev write read max offset ...passed 00:06:46.121 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:46.121 Test: blockdev writev readv 8 blocks ...passed 00:06:46.121 Test: blockdev writev readv 30 x 1block ...passed 00:06:46.121 Test: blockdev writev readv block ...passed 00:06:46.121 Test: blockdev writev readv size > 128k ...passed 00:06:46.121 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:46.121 Test: blockdev comparev and writev ...[2024-12-07 17:23:19.325595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d8c30000 len:0x1000 00:06:46.121 [2024-12-07 17:23:19.325640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:46.121 passed 00:06:46.121 Test: blockdev nvme passthru rw ...passed 00:06:46.121 Test: blockdev nvme passthru vendor specific ...passed 00:06:46.121 Test: blockdev nvme admin passthru ...passed 00:06:46.121 Test: blockdev copy ...passed 00:06:46.121 Suite: bdevio tests on: Nvme1n1p1 00:06:46.121 Test: blockdev write read block ...passed 00:06:46.121 Test: blockdev write zeroes read block ...passed 00:06:46.121 Test: blockdev write zeroes read no split ...passed 00:06:46.121 Test: blockdev write zeroes read split ...passed 00:06:46.121 Test: blockdev write zeroes read split partial ...passed 00:06:46.121 Test: blockdev reset ...[2024-12-07 17:23:19.386450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:46.121 [2024-12-07 17:23:19.390203] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller spassed 00:06:46.121 Test: blockdev write read 8 blocks ...uccessful. 
00:06:46.121 passed 00:06:46.121 Test: blockdev write read size > 128k ...passed 00:06:46.121 Test: blockdev write read invalid size ...passed 00:06:46.121 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:46.121 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:46.121 Test: blockdev write read max offset ...passed 00:06:46.121 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:46.121 Test: blockdev writev readv 8 blocks ...passed 00:06:46.121 Test: blockdev writev readv 30 x 1block ...passed 00:06:46.121 Test: blockdev writev readv block ...passed 00:06:46.121 Test: blockdev writev readv size > 128k ...passed 00:06:46.121 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:46.121 Test: blockdev comparev and writev ...[2024-12-07 17:23:19.408447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b2a0e000 len:0x1000 00:06:46.121 [2024-12-07 17:23:19.408668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:46.121 passed 00:06:46.121 Test: blockdev nvme passthru rw ...passed 00:06:46.121 Test: blockdev nvme passthru vendor specific ...passed 00:06:46.121 Test: blockdev nvme admin passthru ...passed 00:06:46.121 Test: blockdev copy ...passed 00:06:46.121 Suite: bdevio tests on: Nvme0n1 00:06:46.121 Test: blockdev write read block ...passed 00:06:46.121 Test: blockdev write zeroes read block ...passed 00:06:46.121 Test: blockdev write zeroes read no split ...passed 00:06:46.121 Test: blockdev write zeroes read split ...passed 00:06:46.121 Test: blockdev write zeroes read split partial ...passed 00:06:46.121 Test: blockdev reset ...[2024-12-07 17:23:19.462890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:46.121 [2024-12-07 17:23:19.466789] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller spassed 00:06:46.121 Test: blockdev write read 8 blocks ...uccessful. 00:06:46.121 passed 00:06:46.121 Test: blockdev write read size > 128k ...passed 00:06:46.121 Test: blockdev write read invalid size ...passed 00:06:46.121 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:46.121 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:46.121 Test: blockdev write read max offset ...passed 00:06:46.121 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:46.121 Test: blockdev writev readv 8 blocks ...passed 00:06:46.121 Test: blockdev writev readv 30 x 1block ...passed 00:06:46.121 Test: blockdev writev readv block ...passed 00:06:46.121 Test: blockdev writev readv size > 128k ...passed 00:06:46.121 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:46.121 Test: blockdev comparev and writev ...[2024-12-07 17:23:19.483005] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:46.121 separate metadata which is not supported yet. 
00:06:46.121 passed 00:06:46.121 Test: blockdev nvme passthru rw ...passed 00:06:46.121 Test: blockdev nvme passthru vendor specific ...passed 00:06:46.121 Test: blockdev nvme admin passthru ...[2024-12-07 17:23:19.484359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:46.121 [2024-12-07 17:23:19.484400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:46.121 passed 00:06:46.121 Test: blockdev copy ...passed 00:06:46.122 00:06:46.122 Run Summary: Type Total Ran Passed Failed Inactive 00:06:46.122 suites 7 7 n/a 0 0 00:06:46.122 tests 161 161 161 0 0 00:06:46.122 asserts 1025 1025 1025 0 n/a 00:06:46.122 00:06:46.122 Elapsed time = 1.438 seconds 00:06:46.122 0 00:06:46.402 17:23:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61367 00:06:46.402 17:23:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61367 ']' 00:06:46.402 17:23:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61367 00:06:46.402 17:23:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:46.402 17:23:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.402 17:23:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61367 00:06:46.402 17:23:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.402 17:23:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.402 17:23:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61367' 00:06:46.402 killing process with pid 61367 00:06:46.402 17:23:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61367 00:06:46.402 17:23:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61367 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:46.975 00:06:46.975 real 0m2.292s 00:06:46.975 user 0m5.789s 00:06:46.975 sys 0m0.289s 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:46.975 ************************************ 00:06:46.975 END TEST bdev_bounds 00:06:46.975 ************************************ 00:06:46.975 17:23:20 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:46.975 17:23:20 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:46.975 17:23:20 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.975 17:23:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:46.975 ************************************ 00:06:46.975 START TEST bdev_nbd 00:06:46.975 ************************************ 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:46.975 17:23:20 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:46.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61426 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61426 /var/tmp/spdk-nbd.sock 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61426 ']' 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:46.975 17:23:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:46.975 [2024-12-07 17:23:20.341308] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:06:46.975 [2024-12-07 17:23:20.341427] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:47.235 [2024-12-07 17:23:20.503858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.235 [2024-12-07 17:23:20.605819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.816 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.816 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:47.816 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:47.816 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.816 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:47.816 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:47.816 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:47.816 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.816 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:47.816 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:47.816 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:47.816 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:47.816 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:47.816 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:48.086 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:48.086 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:48.086 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:48.086 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:48.086 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:48.086 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:48.086 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:48.086 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:48.086 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:48.086 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:48.086 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:48.086 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:48.086 17:23:21 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.086 1+0 records in 00:06:48.086 1+0 records out 00:06:48.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465616 s, 8.8 MB/s 00:06:48.086 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.086 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:48.086 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.086 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:48.086 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:48.086 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:48.086 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:48.086 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:48.346 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:48.346 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:48.346 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:48.346 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:48.346 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:48.346 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:48.346 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:48.346 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:48.346 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:48.346 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:48.346 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:48.346 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.346 1+0 records in 00:06:48.346 1+0 records out 00:06:48.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000781662 s, 5.2 MB/s 00:06:48.346 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.346 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:48.346 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.346 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:48.346 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:48.346 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:48.346 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:48.346 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:48.608 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:48.608 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:48.608 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:48.608 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:48.608 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:48.608 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:48.608 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:48.608 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:48.608 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:48.608 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:48.608 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:48.608 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.608 1+0 records in 00:06:48.608 1+0 records out 00:06:48.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063242 s, 6.5 MB/s 00:06:48.608 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.608 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:48.608 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.608 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:48.608 17:23:21 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:48.608 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:48.608 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:48.608 17:23:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:48.869 17:23:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:48.869 17:23:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:48.869 17:23:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:48.869 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:48.869 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:48.869 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:48.869 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:48.869 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:48.869 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:48.869 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:48.869 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:48.869 17:23:22 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:48.869 1+0 records in 00:06:48.869 1+0 records out 00:06:48.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000668674 s, 6.1 MB/s 00:06:48.869 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.869 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:48.869 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:48.869 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:48.869 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:48.869 17:23:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:48.869 17:23:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:48.869 17:23:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:49.131 17:23:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:49.131 17:23:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:49.131 17:23:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:49.131 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:49.131 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:49.131 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:49.131 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:49.131 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:49.131 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:49.131 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:49.131 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:49.131 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:49.131 1+0 records in 00:06:49.131 1+0 records out 00:06:49.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594677 s, 6.9 MB/s 00:06:49.131 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.131 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:49.131 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.131 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:49.131 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:49.131 17:23:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:49.131 17:23:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:49.131 17:23:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:06:49.393 17:23:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:49.393 17:23:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:49.393 17:23:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:49.393 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:49.393 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:49.393 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:49.393 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:49.393 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:49.393 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:49.393 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:49.393 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:49.393 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:49.393 1+0 records in 00:06:49.393 1+0 records out 00:06:49.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000671093 s, 6.1 MB/s 00:06:49.393 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.393 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:49.393 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.393 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:49.393 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:49.393 17:23:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:49.393 17:23:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:49.394 17:23:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:49.654 17:23:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:49.654 17:23:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:49.654 17:23:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:49.654 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:49.654 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:49.654 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:49.654 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:49.654 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:49.654 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:49.654 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:49.654 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:49.654 17:23:22 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:49.654 1+0 records in 00:06:49.654 1+0 records out 00:06:49.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000878865 s, 4.7 MB/s 00:06:49.654 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.654 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:49.654 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.654 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:49.654 17:23:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:49.654 17:23:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:49.654 17:23:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:49.655 17:23:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.916 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:49.916 { 00:06:49.916 "nbd_device": "/dev/nbd0", 00:06:49.916 "bdev_name": "Nvme0n1" 00:06:49.916 }, 00:06:49.916 { 00:06:49.916 "nbd_device": "/dev/nbd1", 00:06:49.916 "bdev_name": "Nvme1n1p1" 00:06:49.916 }, 00:06:49.916 { 00:06:49.916 "nbd_device": "/dev/nbd2", 00:06:49.916 "bdev_name": "Nvme1n1p2" 00:06:49.916 }, 00:06:49.916 { 00:06:49.916 "nbd_device": "/dev/nbd3", 00:06:49.916 "bdev_name": "Nvme2n1" 00:06:49.916 }, 00:06:49.916 { 00:06:49.916 "nbd_device": "/dev/nbd4", 00:06:49.916 "bdev_name": "Nvme2n2" 00:06:49.916 }, 00:06:49.916 { 00:06:49.916 "nbd_device": "/dev/nbd5", 00:06:49.916 "bdev_name": "Nvme2n3" 00:06:49.916 }, 00:06:49.916 { 00:06:49.916 "nbd_device": "/dev/nbd6", 00:06:49.916 "bdev_name": "Nvme3n1" 00:06:49.916 } 00:06:49.916 ]' 00:06:49.916 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:49.916 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:49.916 { 00:06:49.916 "nbd_device": "/dev/nbd0", 00:06:49.916 "bdev_name": "Nvme0n1" 00:06:49.916 }, 00:06:49.916 { 00:06:49.916 "nbd_device": "/dev/nbd1", 00:06:49.916 "bdev_name": "Nvme1n1p1" 00:06:49.916 }, 00:06:49.916 { 00:06:49.916 "nbd_device": "/dev/nbd2", 00:06:49.916 "bdev_name": "Nvme1n1p2" 00:06:49.916 }, 00:06:49.916 { 00:06:49.916 "nbd_device": "/dev/nbd3", 00:06:49.916 "bdev_name": "Nvme2n1" 00:06:49.916 }, 00:06:49.916 { 00:06:49.916 "nbd_device": "/dev/nbd4", 00:06:49.916 "bdev_name": "Nvme2n2" 00:06:49.916 }, 00:06:49.916 { 00:06:49.916 "nbd_device": "/dev/nbd5", 00:06:49.916 "bdev_name": "Nvme2n3" 00:06:49.916 }, 00:06:49.916 { 00:06:49.916 "nbd_device": "/dev/nbd6", 00:06:49.916 "bdev_name": "Nvme3n1" 00:06:49.916 } 00:06:49.916 ]' 00:06:49.916 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:49.916 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:49.916 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.916 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:49.916 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:49.916 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:49.916 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.916 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:50.176 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:50.176 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:50.176 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:50.176 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.176 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.176 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:50.176 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.176 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.176 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.176 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:50.176 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:50.176 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:50.176 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:50.176 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.176 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.176 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:50.176 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.176 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.176 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.176 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:50.438 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:50.438 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:50.438 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:50.438 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.438 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.438 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:50.438 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.438 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.438 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.438 17:23:23 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:50.698 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:50.698 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:50.698 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:50.698 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.698 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.698 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:50.698 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.698 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.698 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.698 17:23:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:50.958 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:50.958 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:50.958 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:50.958 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.958 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.958 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:50.958 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.958 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.958 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.958 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:51.219 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:51.219 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:51.219 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:51.219 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.219 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.219 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:51.219 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:51.219 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.219 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.219 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:51.219 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:51.219 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:51.219 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:06:51.219 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.219 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.219 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:51.219 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:51.219 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.220 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.220 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.220 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.478 
17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:51.478 17:23:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:51.751 /dev/nbd0 00:06:51.751 17:23:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.751 17:23:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.751 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:51.751 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:51.751 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:51.751 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:51.751 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:51.751 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:51.751 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:51.751 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:51.751 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.751 1+0 records in 00:06:51.751 1+0 records out 00:06:51.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499059 s, 8.2 MB/s 00:06:51.751 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.751 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:51.751 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.751 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:51.751 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:51.751 17:23:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.751 17:23:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:51.751 17:23:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:52.038 /dev/nbd1 00:06:52.038 17:23:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:52.039 17:23:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:52.039 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:52.039 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:52.039 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:52.039 17:23:25 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:52.039 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:52.039 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:52.039 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:52.039 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:52.039 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.039 1+0 records in 00:06:52.039 1+0 records out 00:06:52.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347962 s, 11.8 MB/s 00:06:52.039 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.039 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:52.039 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.039 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:52.039 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:52.039 17:23:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.039 17:23:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:52.039 17:23:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:52.297 /dev/nbd10 00:06:52.297 17:23:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:52.297 17:23:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:52.297 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:52.297 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:52.297 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:52.297 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:52.297 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:52.297 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:52.297 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:52.297 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:52.297 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.297 1+0 records in 00:06:52.297 1+0 records out 00:06:52.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486449 s, 8.4 MB/s 00:06:52.297 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.298 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:52.298 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.298 17:23:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:52.298 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:52.298 17:23:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.298 17:23:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:52.298 17:23:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:52.556 /dev/nbd11 00:06:52.556 17:23:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:52.556 17:23:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:52.556 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:52.556 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:52.556 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:52.556 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:52.556 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:52.556 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:52.556 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:52.556 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:52.556 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.556 1+0 records in 00:06:52.556 1+0 records out 00:06:52.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004495 s, 9.1 MB/s 00:06:52.556 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.556 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:52.556 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.556 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:52.556 17:23:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:52.556 17:23:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.556 17:23:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:52.556 17:23:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:52.813 /dev/nbd12 00:06:52.813 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:52.813 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:52.813 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:52.813 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:52.813 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:52.813 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:52.813 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
00:06:52.813 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:52.813 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:52.813 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:52.813 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.813 1+0 records in 00:06:52.813 1+0 records out 00:06:52.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473674 s, 8.6 MB/s 00:06:52.813 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.813 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:52.813 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.813 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:52.813 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:52.813 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.814 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:52.814 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:53.071 /dev/nbd13 00:06:53.071 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:53.071 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:53.071 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:53.071 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:53.071 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:53.071 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:53.071 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:53.071 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:53.071 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:53.071 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:53.071 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:53.071 1+0 records in 00:06:53.071 1+0 records out 00:06:53.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303428 s, 13.5 MB/s 00:06:53.071 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.071 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:53.071 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.071 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:53.071 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:53.071 17:23:26 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:53.071 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:53.071 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:53.329 /dev/nbd14 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:53.330 1+0 records in 00:06:53.330 1+0 records out 00:06:53.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528146 s, 7.8 MB/s 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:53.330 { 00:06:53.330 "nbd_device": "/dev/nbd0", 00:06:53.330 "bdev_name": "Nvme0n1" 00:06:53.330 }, 00:06:53.330 { 00:06:53.330 "nbd_device": "/dev/nbd1", 00:06:53.330 "bdev_name": "Nvme1n1p1" 00:06:53.330 }, 00:06:53.330 { 00:06:53.330 "nbd_device": "/dev/nbd10", 00:06:53.330 "bdev_name": "Nvme1n1p2" 00:06:53.330 }, 00:06:53.330 { 00:06:53.330 "nbd_device": "/dev/nbd11", 00:06:53.330 "bdev_name": "Nvme2n1" 00:06:53.330 }, 00:06:53.330 { 00:06:53.330 "nbd_device": "/dev/nbd12", 00:06:53.330 "bdev_name": "Nvme2n2" 00:06:53.330 }, 00:06:53.330 { 00:06:53.330 "nbd_device": "/dev/nbd13", 00:06:53.330 "bdev_name": "Nvme2n3" 
00:06:53.330 }, 00:06:53.330 { 00:06:53.330 "nbd_device": "/dev/nbd14", 00:06:53.330 "bdev_name": "Nvme3n1" 00:06:53.330 } 00:06:53.330 ]' 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:53.330 { 00:06:53.330 "nbd_device": "/dev/nbd0", 00:06:53.330 "bdev_name": "Nvme0n1" 00:06:53.330 }, 00:06:53.330 { 00:06:53.330 "nbd_device": "/dev/nbd1", 00:06:53.330 "bdev_name": "Nvme1n1p1" 00:06:53.330 }, 00:06:53.330 { 00:06:53.330 "nbd_device": "/dev/nbd10", 00:06:53.330 "bdev_name": "Nvme1n1p2" 00:06:53.330 }, 00:06:53.330 { 00:06:53.330 "nbd_device": "/dev/nbd11", 00:06:53.330 "bdev_name": "Nvme2n1" 00:06:53.330 }, 00:06:53.330 { 00:06:53.330 "nbd_device": "/dev/nbd12", 00:06:53.330 "bdev_name": "Nvme2n2" 00:06:53.330 }, 00:06:53.330 { 00:06:53.330 "nbd_device": "/dev/nbd13", 00:06:53.330 "bdev_name": "Nvme2n3" 00:06:53.330 }, 00:06:53.330 { 00:06:53.330 "nbd_device": "/dev/nbd14", 00:06:53.330 "bdev_name": "Nvme3n1" 00:06:53.330 } 00:06:53.330 ]' 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:53.330 /dev/nbd1 00:06:53.330 /dev/nbd10 00:06:53.330 /dev/nbd11 00:06:53.330 /dev/nbd12 00:06:53.330 /dev/nbd13 00:06:53.330 /dev/nbd14' 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:53.330 /dev/nbd1 00:06:53.330 /dev/nbd10 00:06:53.330 /dev/nbd11 00:06:53.330 /dev/nbd12 00:06:53.330 /dev/nbd13 00:06:53.330 /dev/nbd14' 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:53.330 256+0 records in 00:06:53.330 256+0 records out 00:06:53.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00850991 s, 123 MB/s 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.330 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:53.588 256+0 records in 00:06:53.588 256+0 records out 00:06:53.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0703316 s, 14.9 MB/s 00:06:53.588 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.588 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:53.588 256+0 records in 00:06:53.588 256+0 records out 00:06:53.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0795859 s, 13.2 MB/s 00:06:53.588 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.588 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:53.588 256+0 records in 00:06:53.588 256+0 records out 00:06:53.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0903825 s, 11.6 MB/s 00:06:53.588 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.588 17:23:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:53.847 256+0 records in 00:06:53.847 256+0 records out 00:06:53.847 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0785022 s, 13.4 MB/s 00:06:53.847 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.847 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:53.847 256+0 records in 00:06:53.847 256+0 records out 00:06:53.847 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0748064 s, 14.0 MB/s 00:06:53.847 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.847 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:53.847 256+0 records in 00:06:53.847 256+0 records out 00:06:53.847 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0741681 s, 14.1 MB/s 00:06:53.847 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.847 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:54.108 256+0 records in 00:06:54.108 256+0 records out 00:06:54.108 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0745306 s, 14.1 MB/s 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.108 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:54.370 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:54.370 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:54.370 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:54.370 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.370 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.370 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:54.370 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.370 17:23:27 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:54.370 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.370 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:54.370 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:54.370 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:54.370 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:54.370 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.370 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.370 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:54.370 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.370 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.370 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.370 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:54.631 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:54.631 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:54.631 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:54.631 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.631 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.631 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:54.631 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.631 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.631 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.631 17:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:54.891 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:54.891 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:54.891 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:54.891 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.891 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.891 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:54.891 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.891 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.891 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.891 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:55.150 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:06:55.150 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:55.150 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:55.150 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.150 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.150 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:55.150 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:55.150 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.150 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.150 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:55.411 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:55.411 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:55.411 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:55.411 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.411 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.411 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:55.411 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:55.411 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.411 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.411 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:55.672 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:55.672 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:55.672 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:55.672 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.672 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.672 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:55.672 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:55.672 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.672 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.672 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.672 17:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.672 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:55.672 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.672 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:55.934 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:06:55.934 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:55.934 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.934 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:55.934 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:55.934 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:55.934 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:55.934 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:55.934 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:55.934 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:55.934 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.934 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:55.934 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:55.934 malloc_lvol_verify 00:06:55.934 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:56.196 d2c04831-9d09-47f2-aa4a-06bef5708753 00:06:56.196 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:56.458 4ee7af7f-a4eb-493d-88e4-24c7f6611aae 00:06:56.458 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:56.719 /dev/nbd0 00:06:56.719 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:56.719 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:56.719 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:56.719 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:56.719 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:56.719 mke2fs 1.47.0 (5-Feb-2023) 00:06:56.719 Discarding device blocks: 0/4096 done 00:06:56.719 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:56.719 00:06:56.719 Allocating group tables: 0/1 done 00:06:56.719 Writing inode tables: 0/1 done 00:06:56.719 Creating journal (1024 blocks): done 00:06:56.719 Writing superblocks and filesystem accounting information: 0/1 done 00:06:56.719 00:06:56.719 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:56.719 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.719 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:56.719 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.719 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:56.719 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:06:56.719 17:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:56.981 17:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:56.981 17:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:56.981 17:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:56.981 17:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.981 17:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.981 17:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:56.981 17:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:56.981 17:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.981 17:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61426 00:06:56.981 17:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61426 ']' 00:06:56.981 17:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61426 00:06:56.981 17:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:56.981 17:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.981 17:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61426 00:06:56.981 killing process with pid 61426 00:06:56.981 17:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.981 17:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.981 17:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61426' 00:06:56.981 17:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61426 00:06:56.981 17:23:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61426 00:06:57.925 ************************************ 00:06:57.925 END TEST bdev_nbd 00:06:57.925 ************************************ 00:06:57.925 17:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:57.925 00:06:57.925 real 0m10.798s 00:06:57.925 user 0m15.373s 00:06:57.925 sys 0m3.567s 00:06:57.925 17:23:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.925 17:23:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:57.925 17:23:31 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:57.925 17:23:31 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:06:57.925 17:23:31 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:06:57.925 skipping fio tests on NVMe due to multi-ns failures. 00:06:57.925 17:23:31 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:06:57.925 17:23:31 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:57.925 17:23:31 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:57.925 17:23:31 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:57.925 17:23:31 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.925 17:23:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.925 ************************************ 00:06:57.925 START TEST bdev_verify 00:06:57.925 ************************************ 00:06:57.926 17:23:31 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:57.926 [2024-12-07 17:23:31.197591] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:06:57.926 [2024-12-07 17:23:31.197737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61838 ] 00:06:58.190 [2024-12-07 17:23:31.360342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:58.190 [2024-12-07 17:23:31.458929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.190 [2024-12-07 17:23:31.458946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.761 Running I/O for 5 seconds... 
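Note: the bdevperf flags driving this verify pass are -q 128 (128 outstanding I/Os per job), -o 4096 (4 KiB I/O size), -w verify (write a pattern, read it back, compare), and -t 5 (5-second run); -m 0x3 pins the app to cores 0 and 1, and with -C each bdev ends up with a job on both cores, which is why every device reports two rows in the results table below. A hedged equivalent of the invocation, assuming the repo layout used in this run:

# verify workload: 4 KiB I/Os at queue depth 128 for 5 s on cores 0-1
build/examples/bdevperf --json test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3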
00:07:01.069 24576.00 IOPS, 96.00 MiB/s [2024-12-07T17:23:35.385Z] 24384.00 IOPS, 95.25 MiB/s [2024-12-07T17:23:36.316Z] 24106.67 IOPS, 94.17 MiB/s [2024-12-07T17:23:37.250Z] 23520.00 IOPS, 91.88 MiB/s [2024-12-07T17:23:37.250Z] 23500.80 IOPS, 91.80 MiB/s 00:07:03.868 Latency(us) 00:07:03.868 [2024-12-07T17:23:37.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:03.868 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:03.868 Verification LBA range: start 0x0 length 0xbd0bd 00:07:03.868 Nvme0n1 : 5.05 1623.61 6.34 0.00 0.00 78549.37 17745.13 79853.10 00:07:03.868 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:03.868 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:03.868 Nvme0n1 : 5.04 1674.59 6.54 0.00 0.00 76150.85 16837.71 75416.81 00:07:03.868 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:03.868 Verification LBA range: start 0x0 length 0x4ff80 00:07:03.868 Nvme1n1p1 : 5.05 1622.95 6.34 0.00 0.00 78388.62 19660.80 68560.74 00:07:03.868 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:03.868 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:03.868 Nvme1n1p1 : 5.07 1680.18 6.56 0.00 0.00 75775.27 8570.09 68157.44 00:07:03.868 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:03.868 Verification LBA range: start 0x0 length 0x4ff7f 00:07:03.868 Nvme1n1p2 : 5.07 1628.54 6.36 0.00 0.00 78015.90 7410.61 64124.46 00:07:03.868 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:03.868 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:03.868 Nvme1n1p2 : 5.07 1679.58 6.56 0.00 0.00 75686.80 8570.09 61704.66 00:07:03.868 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:03.868 Verification LBA range: start 0x0 length 0x80000 00:07:03.868 Nvme2n1 : 5.07 1627.26 6.36 0.00 0.00 77883.08 10435.35 59284.87 00:07:03.868 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:03.868 Verification LBA range: start 0x80000 length 0x80000 00:07:03.868 Nvme2n1 : 5.07 1679.07 6.56 0.00 0.00 75597.90 8872.57 58478.28 00:07:03.868 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:03.868 Verification LBA range: start 0x0 length 0x80000 00:07:03.868 Nvme2n2 : 5.09 1635.20 6.39 0.00 0.00 77496.65 9779.99 61704.66 00:07:03.868 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:03.868 Verification LBA range: start 0x80000 length 0x80000 00:07:03.868 Nvme2n2 : 5.07 1677.80 6.55 0.00 0.00 75489.29 11141.12 59284.87 00:07:03.868 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:03.868 Verification LBA range: start 0x0 length 0x80000 00:07:03.868 Nvme2n3 : 5.09 1634.75 6.39 0.00 0.00 77335.67 9679.16 63721.16 00:07:03.868 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:03.868 Verification LBA range: start 0x80000 length 0x80000 00:07:03.868 Nvme2n3 : 5.08 1686.55 6.59 0.00 0.00 75131.20 7158.55 60898.07 00:07:03.868 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:03.868 Verification LBA range: start 0x0 length 0x20000 00:07:03.868 Nvme3n1 : 5.09 1634.31 6.38 0.00 0.00 77243.50 9578.34 65737.65 00:07:03.868 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:03.868 Verification LBA range: start 0x20000 length 0x20000 00:07:03.868 Nvme3n1 : 
5.09 1686.05 6.59 0.00 0.00 75007.00 7360.20 62914.56 00:07:03.868 [2024-12-07T17:23:37.250Z] =================================================================================================================== 00:07:03.868 [2024-12-07T17:23:37.250Z] Total : 23170.43 90.51 0.00 0.00 76676.67 7158.55 79853.10 00:07:05.237 00:07:05.237 real 0m7.344s 00:07:05.237 user 0m13.780s 00:07:05.237 sys 0m0.213s 00:07:05.237 17:23:38 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.237 17:23:38 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:05.237 ************************************ 00:07:05.237 END TEST bdev_verify 00:07:05.237 ************************************ 00:07:05.237 17:23:38 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:05.237 17:23:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:05.237 17:23:38 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.237 17:23:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:05.237 ************************************ 00:07:05.237 START TEST bdev_verify_big_io 00:07:05.237 ************************************ 00:07:05.237 17:23:38 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:05.237 [2024-12-07 17:23:38.579916] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:05.237 [2024-12-07 17:23:38.580039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61936 ] 00:07:05.495 [2024-12-07 17:23:38.740961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.495 [2024-12-07 17:23:38.837629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.495 [2024-12-07 17:23:38.837640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.428 Running I/O for 5 seconds... 
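Note: the big-I/O pass is the same verify workload with -o 65536, i.e. 64 KiB I/Os instead of 4 KiB, which stresses request handling and splitting much harder; hence the lower IOPS and the wide min/max spread in the table that follows. Sketch, same assumptions as above:

# identical verify run, but with 64 KiB I/Os
build/examples/bdevperf --json test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3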
00:07:12.282 1304.00 IOPS, 81.50 MiB/s [2024-12-07T17:23:45.664Z] 2326.50 IOPS, 145.41 MiB/s [2024-12-07T17:23:46.598Z] 2845.00 IOPS, 177.81 MiB/s [2024-12-07T17:23:46.598Z] 2890.25 IOPS, 180.64 MiB/s 00:07:13.216 Latency(us) 00:07:13.216 [2024-12-07T17:23:46.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:13.216 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:13.216 Verification LBA range: start 0x0 length 0xbd0b 00:07:13.216 Nvme0n1 : 5.76 79.14 4.95 0.00 0.00 1511754.11 29239.14 2193943.63 00:07:13.216 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:13.216 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:13.216 Nvme0n1 : 6.04 97.93 6.12 0.00 0.00 1247479.63 13510.50 1780966.01 00:07:13.216 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:13.216 Verification LBA range: start 0x0 length 0x4ff8 00:07:13.216 Nvme1n1p1 : 6.07 87.83 5.49 0.00 0.00 1289728.62 51218.90 1987454.82 00:07:13.216 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:13.216 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:13.216 Nvme1n1p1 : 5.97 124.20 7.76 0.00 0.00 961369.77 68560.74 942105.21 00:07:13.216 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:13.216 Verification LBA range: start 0x0 length 0x4ff7 00:07:13.216 Nvme1n1p2 : 6.12 92.17 5.76 0.00 0.00 1154139.37 57268.38 2013265.92 00:07:13.216 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:13.216 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:13.216 Nvme1n1p2 : 5.98 125.16 7.82 0.00 0.00 923697.97 70577.23 890483.00 00:07:13.216 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:13.216 Verification LBA range: start 0x0 length 0x8000 00:07:13.216 Nvme2n1 : 6.26 110.09 6.88 0.00 0.00 928125.56 35490.26 2039077.02 00:07:13.216 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:13.216 Verification LBA range: start 0x8000 length 0x8000 00:07:13.216 Nvme2n1 : 5.89 125.03 7.81 0.00 0.00 901743.03 71383.83 935652.43 00:07:13.216 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:13.216 Verification LBA range: start 0x0 length 0x8000 00:07:13.216 Nvme2n2 : 6.45 145.20 9.08 0.00 0.00 671174.56 25306.98 1871304.86 00:07:13.216 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:13.216 Verification LBA range: start 0x8000 length 0x8000 00:07:13.216 Nvme2n2 : 5.98 128.45 8.03 0.00 0.00 855532.57 88725.66 948557.98 00:07:13.216 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:13.216 Verification LBA range: start 0x0 length 0x8000 00:07:13.216 Nvme2n3 : 6.71 203.16 12.70 0.00 0.00 457788.12 7864.32 2103604.78 00:07:13.216 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:13.216 Verification LBA range: start 0x8000 length 0x8000 00:07:13.216 Nvme2n3 : 6.04 137.52 8.59 0.00 0.00 784714.04 24702.03 1051802.39 00:07:13.216 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:13.216 Verification LBA range: start 0x0 length 0x2000 00:07:13.216 Nvme3n1 : 7.00 332.23 20.76 0.00 0.00 269848.58 768.79 1742249.35 00:07:13.216 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:13.216 Verification LBA range: start 0x2000 length 0x2000 00:07:13.216 Nvme3n1 : 6.05 148.10 9.26 0.00 0.00 712230.06 
2041.70 987274.63 00:07:13.216 [2024-12-07T17:23:46.598Z] =================================================================================================================== 00:07:13.216 [2024-12-07T17:23:46.598Z] Total : 1936.21 121.01 0.00 0.00 763686.67 768.79 2193943.63 00:07:15.119 00:07:15.119 real 0m9.639s 00:07:15.119 user 0m18.323s 00:07:15.119 sys 0m0.250s 00:07:15.119 17:23:48 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.119 ************************************ 00:07:15.119 END TEST bdev_verify_big_io 00:07:15.119 ************************************ 00:07:15.119 17:23:48 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:15.119 17:23:48 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:15.119 17:23:48 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:15.119 17:23:48 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.119 17:23:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:15.119 ************************************ 00:07:15.119 START TEST bdev_write_zeroes 00:07:15.119 ************************************ 00:07:15.119 17:23:48 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:15.119 [2024-12-07 17:23:48.289422] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:15.119 [2024-12-07 17:23:48.289554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62056 ] 00:07:15.119 [2024-12-07 17:23:48.451192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.380 [2024-12-07 17:23:48.554930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.951 Running I/O for 1 seconds... 
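Note: the write_zeroes pass swaps the workload for -w write_zeroes on a single core for one second; it is a smoke test that zero-fill requests complete cleanly on every bdev rather than a steady-state throughput measurement. Sketch, same assumptions:

# 1-second write_zeroes smoke test, single reactor
build/examples/bdevperf --json test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1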
00:07:16.887 66304.00 IOPS, 259.00 MiB/s 00:07:16.887 Latency(us) 00:07:16.887 [2024-12-07T17:23:50.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.888 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.888 Nvme0n1 : 1.02 9438.97 36.87 0.00 0.00 13529.54 10838.65 24399.56 00:07:16.888 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.888 Nvme1n1p1 : 1.03 9427.36 36.83 0.00 0.00 13528.19 10536.17 25004.50 00:07:16.888 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.888 Nvme1n1p2 : 1.03 9415.89 36.78 0.00 0.00 13506.39 10889.06 23693.78 00:07:16.888 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.888 Nvme2n1 : 1.03 9405.29 36.74 0.00 0.00 13494.88 11040.30 23088.84 00:07:16.888 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.888 Nvme2n2 : 1.03 9394.69 36.70 0.00 0.00 13491.21 11090.71 22685.54 00:07:16.888 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.888 Nvme2n3 : 1.03 9384.09 36.66 0.00 0.00 13489.26 11141.12 22887.19 00:07:16.888 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.888 Nvme3n1 : 1.03 9373.58 36.62 0.00 0.00 13467.31 10435.35 24500.38 00:07:16.888 [2024-12-07T17:23:50.270Z] =================================================================================================================== 00:07:16.888 [2024-12-07T17:23:50.270Z] Total : 65839.87 257.19 0.00 0.00 13500.97 10435.35 25004.50 00:07:17.831 00:07:17.831 real 0m2.721s 00:07:17.831 user 0m2.414s 00:07:17.831 sys 0m0.193s 00:07:17.831 17:23:50 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.831 ************************************ 00:07:17.831 END TEST bdev_write_zeroes 00:07:17.831 ************************************ 00:07:17.831 17:23:50 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:17.831 17:23:50 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:17.831 17:23:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:17.831 17:23:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.831 17:23:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:17.831 ************************************ 00:07:17.831 START TEST bdev_json_nonenclosed 00:07:17.831 ************************************ 00:07:17.831 17:23:51 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:17.831 [2024-12-07 17:23:51.067102] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:07:17.831 [2024-12-07 17:23:51.067218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62109 ] 00:07:18.090 [2024-12-07 17:23:51.226781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.090 [2024-12-07 17:23:51.325247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.090 [2024-12-07 17:23:51.325323] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:18.090 [2024-12-07 17:23:51.325339] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:18.090 [2024-12-07 17:23:51.325348] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.351 00:07:18.351 real 0m0.499s 00:07:18.351 user 0m0.299s 00:07:18.351 sys 0m0.095s 00:07:18.351 ************************************ 00:07:18.351 END TEST bdev_json_nonenclosed 00:07:18.351 ************************************ 00:07:18.351 17:23:51 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.351 17:23:51 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:18.351 17:23:51 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:18.351 17:23:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:18.351 17:23:51 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.351 17:23:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.351 ************************************ 00:07:18.351 START TEST bdev_json_nonarray 00:07:18.351 ************************************ 00:07:18.351 17:23:51 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:18.351 [2024-12-07 17:23:51.619028] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:18.351 [2024-12-07 17:23:51.619142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62135 ] 00:07:18.611 [2024-12-07 17:23:51.777623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.611 [2024-12-07 17:23:51.876670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.611 [2024-12-07 17:23:51.876774] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:18.611 [2024-12-07 17:23:51.876798] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:18.611 [2024-12-07 17:23:51.876811] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.872 00:07:18.872 real 0m0.498s 00:07:18.872 user 0m0.298s 00:07:18.872 sys 0m0.097s 00:07:18.872 ************************************ 00:07:18.872 END TEST bdev_json_nonarray 00:07:18.872 ************************************ 00:07:18.872 17:23:52 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.872 17:23:52 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:18.872 17:23:52 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:18.872 17:23:52 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:18.872 17:23:52 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:18.872 17:23:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.872 17:23:52 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.872 17:23:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.872 ************************************ 00:07:18.872 START TEST bdev_gpt_uuid 00:07:18.872 ************************************ 00:07:18.872 17:23:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:18.872 17:23:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:18.872 17:23:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:18.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.872 17:23:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62160 00:07:18.872 17:23:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:18.872 17:23:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62160 00:07:18.872 17:23:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62160 ']' 00:07:18.872 17:23:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.872 17:23:52 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:18.872 17:23:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.872 17:23:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.872 17:23:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.872 17:23:52 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:18.872 [2024-12-07 17:23:52.198706] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:07:18.872 [2024-12-07 17:23:52.198828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62160 ] 00:07:19.133 [2024-12-07 17:23:52.352029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.133 [2024-12-07 17:23:52.453351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.708 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.708 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:19.708 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:19.708 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.708 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:20.282 Some configs were skipped because the RPC state that can call them passed over. 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:20.282 { 00:07:20.282 "name": "Nvme1n1p1", 00:07:20.282 "aliases": [ 00:07:20.282 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:20.282 ], 00:07:20.282 "product_name": "GPT Disk", 00:07:20.282 "block_size": 4096, 00:07:20.282 "num_blocks": 655104, 00:07:20.282 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:20.282 "assigned_rate_limits": { 00:07:20.282 "rw_ios_per_sec": 0, 00:07:20.282 "rw_mbytes_per_sec": 0, 00:07:20.282 "r_mbytes_per_sec": 0, 00:07:20.282 "w_mbytes_per_sec": 0 00:07:20.282 }, 00:07:20.282 "claimed": false, 00:07:20.282 "zoned": false, 00:07:20.282 "supported_io_types": { 00:07:20.282 "read": true, 00:07:20.282 "write": true, 00:07:20.282 "unmap": true, 00:07:20.282 "flush": true, 00:07:20.282 "reset": true, 00:07:20.282 "nvme_admin": false, 00:07:20.282 "nvme_io": false, 00:07:20.282 "nvme_io_md": false, 00:07:20.282 "write_zeroes": true, 00:07:20.282 "zcopy": false, 00:07:20.282 "get_zone_info": false, 00:07:20.282 "zone_management": false, 00:07:20.282 "zone_append": false, 00:07:20.282 "compare": true, 00:07:20.282 "compare_and_write": false, 00:07:20.282 "abort": true, 00:07:20.282 "seek_hole": false, 00:07:20.282 "seek_data": false, 00:07:20.282 "copy": true, 00:07:20.282 "nvme_iov_md": false 00:07:20.282 }, 00:07:20.282 "driver_specific": { 
00:07:20.282 "gpt": { 00:07:20.282 "base_bdev": "Nvme1n1", 00:07:20.282 "offset_blocks": 256, 00:07:20.282 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:20.282 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:20.282 "partition_name": "SPDK_TEST_first" 00:07:20.282 } 00:07:20.282 } 00:07:20.282 } 00:07:20.282 ]' 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:20.282 { 00:07:20.282 "name": "Nvme1n1p2", 00:07:20.282 "aliases": [ 00:07:20.282 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:20.282 ], 00:07:20.282 "product_name": "GPT Disk", 00:07:20.282 "block_size": 4096, 00:07:20.282 "num_blocks": 655103, 00:07:20.282 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:20.282 "assigned_rate_limits": { 00:07:20.282 "rw_ios_per_sec": 0, 00:07:20.282 "rw_mbytes_per_sec": 0, 00:07:20.282 "r_mbytes_per_sec": 0, 00:07:20.282 "w_mbytes_per_sec": 0 00:07:20.282 }, 00:07:20.282 "claimed": false, 00:07:20.282 "zoned": false, 00:07:20.282 "supported_io_types": { 00:07:20.282 "read": true, 00:07:20.282 "write": true, 00:07:20.282 "unmap": true, 00:07:20.282 "flush": true, 00:07:20.282 "reset": true, 00:07:20.282 "nvme_admin": false, 00:07:20.282 "nvme_io": false, 00:07:20.282 "nvme_io_md": false, 00:07:20.282 "write_zeroes": true, 00:07:20.282 "zcopy": false, 00:07:20.282 "get_zone_info": false, 00:07:20.282 "zone_management": false, 00:07:20.282 "zone_append": false, 00:07:20.282 "compare": true, 00:07:20.282 "compare_and_write": false, 00:07:20.282 "abort": true, 00:07:20.282 "seek_hole": false, 00:07:20.282 "seek_data": false, 00:07:20.282 "copy": true, 00:07:20.282 "nvme_iov_md": false 00:07:20.282 }, 00:07:20.282 "driver_specific": { 00:07:20.282 "gpt": { 00:07:20.282 "base_bdev": "Nvme1n1", 00:07:20.282 "offset_blocks": 655360, 00:07:20.282 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:20.282 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:20.282 "partition_name": "SPDK_TEST_second" 00:07:20.282 } 00:07:20.282 } 00:07:20.282 } 00:07:20.282 ]' 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62160 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62160 ']' 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62160 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.282 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62160 00:07:20.544 killing process with pid 62160 00:07:20.544 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.544 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.544 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62160' 00:07:20.544 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62160 00:07:20.544 17:23:53 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62160 00:07:21.930 00:07:21.930 real 0m3.032s 00:07:21.930 user 0m3.224s 00:07:21.930 sys 0m0.367s 00:07:21.930 17:23:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.930 ************************************ 00:07:21.930 END TEST bdev_gpt_uuid 00:07:21.930 ************************************ 00:07:21.930 17:23:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:21.930 17:23:55 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:21.930 17:23:55 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:21.930 17:23:55 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:21.930 17:23:55 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:21.930 17:23:55 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:21.930 17:23:55 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:21.930 17:23:55 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:21.930 17:23:55 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:21.930 17:23:55 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:22.190 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:22.451 Waiting for block devices as requested 00:07:22.451 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:22.451 0000:00:10.0 (1b36 0010): 
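The check traced above works because SPDK registers each GPT partition bdev with its GPT unique_partition_guid as the bdev alias, so '.aliases[0]' and '.driver_specific.gpt.unique_partition_guid' must come back identical for both SPDK_TEST_first and SPDK_TEST_second. A minimal standalone sketch of the same verification, assuming a running SPDK target and the in-repo rpc.py client (the bdev name here is illustrative, not taken from this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # in-repo JSON-RPC client
    out=$("$rpc" bdev_get_bdevs -b Nvme1n1p2)            # query a single GPT bdev, as above
    [[ $(jq -r 'length' <<<"$out") == 1 ]] || exit 1     # exactly one bdev in the reply
    alias_uuid=$(jq -r '.[0].aliases[0]' <<<"$out")
    part_guid=$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$out")
    [[ "$alias_uuid" == "$part_guid" ]] && echo "alias matches unique_partition_guid"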
uio_pci_generic -> nvme 00:07:22.712 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:22.712 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:28.061 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:28.061 17:24:01 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:28.061 17:24:01 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:28.061 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:28.061 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:28.061 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:28.061 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:28.061 17:24:01 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:28.061 00:07:28.061 real 0m57.031s 00:07:28.061 user 1m13.724s 00:07:28.061 sys 0m7.771s 00:07:28.061 17:24:01 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.061 ************************************ 00:07:28.061 END TEST blockdev_nvme_gpt 00:07:28.061 17:24:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:28.061 ************************************ 00:07:28.061 17:24:01 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:28.061 17:24:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.061 17:24:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.061 17:24:01 -- common/autotest_common.sh@10 -- # set +x 00:07:28.061 ************************************ 00:07:28.061 START TEST nvme 00:07:28.061 ************************************ 00:07:28.061 17:24:01 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:28.323 * Looking for test storage... 00:07:28.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:28.323 17:24:01 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:28.323 17:24:01 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:28.323 17:24:01 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:28.323 17:24:01 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:28.323 17:24:01 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.323 17:24:01 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.323 17:24:01 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.323 17:24:01 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.323 17:24:01 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.323 17:24:01 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.323 17:24:01 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.323 17:24:01 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.323 17:24:01 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.323 17:24:01 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.323 17:24:01 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.323 17:24:01 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:28.323 17:24:01 nvme -- scripts/common.sh@345 -- # : 1 00:07:28.323 17:24:01 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.323 17:24:01 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
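The wipefs step above is the cleanup that hands the next suite a blank disk: it erased the primary GPT header (the 8 bytes 45 46 49 20 50 41 52 54, i.e. "EFI PART", at offset 0x1000), the backup header near the end of the device, and the 0x55AA protective-MBR signature, then re-read the partition table. A hedged equivalent of that step (device path as in this run; requires root):

    wipefs --all /dev/nvme0n1          # erase GPT primary and backup headers plus the PMBR 55 aa
    blockdev --rereadpt /dev/nvme0n1   # same effect as the re-read ioctl wipefs issued above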
ver1_l : ver2_l) )) 00:07:28.323 17:24:01 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:28.323 17:24:01 nvme -- scripts/common.sh@353 -- # local d=1 00:07:28.323 17:24:01 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.323 17:24:01 nvme -- scripts/common.sh@355 -- # echo 1 00:07:28.323 17:24:01 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.323 17:24:01 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:28.323 17:24:01 nvme -- scripts/common.sh@353 -- # local d=2 00:07:28.323 17:24:01 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.323 17:24:01 nvme -- scripts/common.sh@355 -- # echo 2 00:07:28.323 17:24:01 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.323 17:24:01 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.323 17:24:01 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.323 17:24:01 nvme -- scripts/common.sh@368 -- # return 0 00:07:28.323 17:24:01 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.323 17:24:01 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:28.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.323 --rc genhtml_branch_coverage=1 00:07:28.323 --rc genhtml_function_coverage=1 00:07:28.323 --rc genhtml_legend=1 00:07:28.323 --rc geninfo_all_blocks=1 00:07:28.323 --rc geninfo_unexecuted_blocks=1 00:07:28.323 00:07:28.323 ' 00:07:28.323 17:24:01 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:28.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.323 --rc genhtml_branch_coverage=1 00:07:28.323 --rc genhtml_function_coverage=1 00:07:28.323 --rc genhtml_legend=1 00:07:28.323 --rc geninfo_all_blocks=1 00:07:28.323 --rc geninfo_unexecuted_blocks=1 00:07:28.323 00:07:28.323 ' 00:07:28.323 17:24:01 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:28.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.323 --rc genhtml_branch_coverage=1 00:07:28.323 --rc genhtml_function_coverage=1 00:07:28.323 --rc genhtml_legend=1 00:07:28.323 --rc geninfo_all_blocks=1 00:07:28.323 --rc geninfo_unexecuted_blocks=1 00:07:28.323 00:07:28.323 ' 00:07:28.323 17:24:01 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:28.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.323 --rc genhtml_branch_coverage=1 00:07:28.323 --rc genhtml_function_coverage=1 00:07:28.323 --rc genhtml_legend=1 00:07:28.323 --rc geninfo_all_blocks=1 00:07:28.323 --rc geninfo_unexecuted_blocks=1 00:07:28.323 00:07:28.323 ' 00:07:28.323 17:24:01 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:28.894 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:29.154 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:29.154 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:29.154 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:29.413 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:29.413 17:24:02 nvme -- nvme/nvme.sh@79 -- # uname 00:07:29.413 17:24:02 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:29.413 17:24:02 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:29.413 17:24:02 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:29.413 17:24:02 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:29.413 17:24:02 nvme -- 
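The scripts/common.sh trace above is the stock dotted-version comparison: both version strings are split on '.', '-' and ':' into arrays, then compared field by field until one side differs, so lt 1.15 2 succeeds (1 < 2 in the first field) and the extra lcov coverage flags get enabled. A self-contained re-sketch of that logic, offered as an illustration rather than the verbatim helper (assumes purely numeric fields):

    version_lt() {                     # returns 0 iff $1 < $2
      local -a v1 v2; local i
      IFS='.-:' read -ra v1 <<<"$1"
      IFS='.-:' read -ra v2 <<<"$2"
      for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields count as 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                         # equal is not less-than
    }
    version_lt 1.15 2 && echo "1.15 < 2"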
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:29.413 17:24:02 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:29.413 Waiting for stub to ready for secondary processes... 00:07:29.413 17:24:02 nvme -- common/autotest_common.sh@1075 -- # stubpid=62797 00:07:29.413 17:24:02 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:29.413 17:24:02 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:29.413 17:24:02 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62797 ]] 00:07:29.413 17:24:02 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:29.414 17:24:02 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:29.414 [2024-12-07 17:24:02.665273] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:07:29.414 [2024-12-07 17:24:02.665396] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:30.352 [2024-12-07 17:24:03.459297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:30.352 [2024-12-07 17:24:03.558713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.352 [2024-12-07 17:24:03.559026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.352 [2024-12-07 17:24:03.559190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.352 [2024-12-07 17:24:03.572646] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:30.352 [2024-12-07 17:24:03.572872] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:30.352 [2024-12-07 17:24:03.587725] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:30.352 [2024-12-07 17:24:03.587916] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:30.352 [2024-12-07 17:24:03.591554] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:30.352 [2024-12-07 17:24:03.591854] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:30.352 [2024-12-07 17:24:03.591950] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:30.352 [2024-12-07 17:24:03.595390] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:30.352 [2024-12-07 17:24:03.595709] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:30.352 [2024-12-07 17:24:03.595816] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:30.352 [2024-12-07 17:24:03.599924] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:30.353 [2024-12-07 17:24:03.600250] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:30.353 [2024-12-07 17:24:03.600349] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:30.353 [2024-12-07 17:24:03.600417] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:30.353 [2024-12-07 17:24:03.600484] nvme_cuse.c: 
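The stub handshake above is plain polling: the parent records the stub's PID (62797 in this run), then loops until the stub publishes /var/run/spdk_stub0, giving up if /proc/<pid> vanishes first; only then may the secondary test processes attach to the preallocated hugepage memory. A hedged reconstruction of that wait loop (the real helper lives in common/autotest_common.sh):

    stubpid=62797                      # PID reported for this run
    echo "Waiting for stub to ready for secondary processes..."
    while [ ! -e /var/run/spdk_stub0 ]; do
      [[ -e /proc/$stubpid ]] || { echo "stub exited early" >&2; exit 1; }
      sleep 1s
    done
    echo done.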
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:30.353 done. 00:07:30.353 17:24:03 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:30.353 17:24:03 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:30.353 17:24:03 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:30.353 17:24:03 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:30.353 17:24:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.353 17:24:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:30.353 ************************************ 00:07:30.353 START TEST nvme_reset 00:07:30.353 ************************************ 00:07:30.353 17:24:03 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:30.612 Initializing NVMe Controllers 00:07:30.612 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:30.612 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:30.612 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:30.612 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:30.612 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:30.612 00:07:30.612 real 0m0.231s 00:07:30.612 user 0m0.072s 00:07:30.612 sys 0m0.106s 00:07:30.612 17:24:03 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.612 ************************************ 00:07:30.612 END TEST nvme_reset 00:07:30.612 ************************************ 00:07:30.612 17:24:03 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:30.612 17:24:03 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:30.612 17:24:03 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.612 17:24:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.612 17:24:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:30.612 ************************************ 00:07:30.612 START TEST nvme_identify 00:07:30.612 ************************************ 00:07:30.612 17:24:03 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:30.612 17:24:03 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:30.612 17:24:03 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:30.612 17:24:03 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:30.612 17:24:03 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:30.612 17:24:03 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:30.612 17:24:03 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:30.612 17:24:03 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:30.612 17:24:03 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:30.612 17:24:03 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:30.876 17:24:03 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:30.876 17:24:03 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:30.876 17:24:03 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:30.876 [2024-12-07 
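get_nvme_bdfs above builds the device list by rendering the generated SPDK config and pulling each controller's PCI address out with jq; the (( 4 == 0 )) trace is just the guard that fails the test when no controllers are found. A hedged standalone version of that enumeration:

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"   # 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 in this run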
17:24:04.183396] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62818 terminated unexpected 00:07:30.876 ===================================================== 00:07:30.876 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:30.876 ===================================================== 00:07:30.876 Controller Capabilities/Features 00:07:30.876 ================================ 00:07:30.876 Vendor ID: 1b36 00:07:30.876 Subsystem Vendor ID: 1af4 00:07:30.876 Serial Number: 12340 00:07:30.876 Model Number: QEMU NVMe Ctrl 00:07:30.876 Firmware Version: 8.0.0 00:07:30.876 Recommended Arb Burst: 6 00:07:30.876 IEEE OUI Identifier: 00 54 52 00:07:30.876 Multi-path I/O 00:07:30.876 May have multiple subsystem ports: No 00:07:30.876 May have multiple controllers: No 00:07:30.876 Associated with SR-IOV VF: No 00:07:30.876 Max Data Transfer Size: 524288 00:07:30.876 Max Number of Namespaces: 256 00:07:30.876 Max Number of I/O Queues: 64 00:07:30.876 NVMe Specification Version (VS): 1.4 00:07:30.876 NVMe Specification Version (Identify): 1.4 00:07:30.876 Maximum Queue Entries: 2048 00:07:30.876 Contiguous Queues Required: Yes 00:07:30.876 Arbitration Mechanisms Supported 00:07:30.876 Weighted Round Robin: Not Supported 00:07:30.876 Vendor Specific: Not Supported 00:07:30.877 Reset Timeout: 7500 ms 00:07:30.877 Doorbell Stride: 4 bytes 00:07:30.877 NVM Subsystem Reset: Not Supported 00:07:30.877 Command Sets Supported 00:07:30.877 NVM Command Set: Supported 00:07:30.877 Boot Partition: Not Supported 00:07:30.877 Memory Page Size Minimum: 4096 bytes 00:07:30.877 Memory Page Size Maximum: 65536 bytes 00:07:30.877 Persistent Memory Region: Not Supported 00:07:30.877 Optional Asynchronous Events Supported 00:07:30.877 Namespace Attribute Notices: Supported 00:07:30.877 Firmware Activation Notices: Not Supported 00:07:30.877 ANA Change Notices: Not Supported 00:07:30.877 PLE Aggregate Log Change Notices: Not Supported 00:07:30.877 LBA Status Info Alert Notices: Not Supported 00:07:30.877 EGE Aggregate Log Change Notices: Not Supported 00:07:30.877 Normal NVM Subsystem Shutdown event: Not Supported 00:07:30.877 Zone Descriptor Change Notices: Not Supported 00:07:30.877 Discovery Log Change Notices: Not Supported 00:07:30.877 Controller Attributes 00:07:30.877 128-bit Host Identifier: Not Supported 00:07:30.877 Non-Operational Permissive Mode: Not Supported 00:07:30.877 NVM Sets: Not Supported 00:07:30.877 Read Recovery Levels: Not Supported 00:07:30.877 Endurance Groups: Not Supported 00:07:30.877 Predictable Latency Mode: Not Supported 00:07:30.877 Traffic Based Keep ALive: Not Supported 00:07:30.877 Namespace Granularity: Not Supported 00:07:30.877 SQ Associations: Not Supported 00:07:30.877 UUID List: Not Supported 00:07:30.877 Multi-Domain Subsystem: Not Supported 00:07:30.877 Fixed Capacity Management: Not Supported 00:07:30.877 Variable Capacity Management: Not Supported 00:07:30.877 Delete Endurance Group: Not Supported 00:07:30.877 Delete NVM Set: Not Supported 00:07:30.877 Extended LBA Formats Supported: Supported 00:07:30.877 Flexible Data Placement Supported: Not Supported 00:07:30.877 00:07:30.877 Controller Memory Buffer Support 00:07:30.877 ================================ 00:07:30.877 Supported: No 00:07:30.877 00:07:30.877 Persistent Memory Region Support 00:07:30.877 ================================ 00:07:30.877 Supported: No 00:07:30.877 00:07:30.877 Admin Command Set Attributes 00:07:30.877 ============================ 00:07:30.877 Security Send/Receive: 
Not Supported 00:07:30.877 Format NVM: Supported 00:07:30.877 Firmware Activate/Download: Not Supported 00:07:30.877 Namespace Management: Supported 00:07:30.877 Device Self-Test: Not Supported 00:07:30.877 Directives: Supported 00:07:30.877 NVMe-MI: Not Supported 00:07:30.877 Virtualization Management: Not Supported 00:07:30.877 Doorbell Buffer Config: Supported 00:07:30.877 Get LBA Status Capability: Not Supported 00:07:30.877 Command & Feature Lockdown Capability: Not Supported 00:07:30.877 Abort Command Limit: 4 00:07:30.877 Async Event Request Limit: 4 00:07:30.877 Number of Firmware Slots: N/A 00:07:30.877 Firmware Slot 1 Read-Only: N/A 00:07:30.877 Firmware Activation Without Reset: N/A 00:07:30.877 Multiple Update Detection Support: N/A 00:07:30.877 Firmware Update Granularity: No Information Provided 00:07:30.877 Per-Namespace SMART Log: Yes 00:07:30.877 Asymmetric Namespace Access Log Page: Not Supported 00:07:30.877 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:30.877 Command Effects Log Page: Supported 00:07:30.877 Get Log Page Extended Data: Supported 00:07:30.877 Telemetry Log Pages: Not Supported 00:07:30.877 Persistent Event Log Pages: Not Supported 00:07:30.877 Supported Log Pages Log Page: May Support 00:07:30.877 Commands Supported & Effects Log Page: Not Supported 00:07:30.877 Feature Identifiers & Effects Log Page:May Support 00:07:30.877 NVMe-MI Commands & Effects Log Page: May Support 00:07:30.877 Data Area 4 for Telemetry Log: Not Supported 00:07:30.877 Error Log Page Entries Supported: 1 00:07:30.877 Keep Alive: Not Supported 00:07:30.877 00:07:30.877 NVM Command Set Attributes 00:07:30.877 ========================== 00:07:30.877 Submission Queue Entry Size 00:07:30.877 Max: 64 00:07:30.877 Min: 64 00:07:30.877 Completion Queue Entry Size 00:07:30.877 Max: 16 00:07:30.877 Min: 16 00:07:30.877 Number of Namespaces: 256 00:07:30.877 Compare Command: Supported 00:07:30.877 Write Uncorrectable Command: Not Supported 00:07:30.877 Dataset Management Command: Supported 00:07:30.877 Write Zeroes Command: Supported 00:07:30.877 Set Features Save Field: Supported 00:07:30.877 Reservations: Not Supported 00:07:30.877 Timestamp: Supported 00:07:30.877 Copy: Supported 00:07:30.877 Volatile Write Cache: Present 00:07:30.877 Atomic Write Unit (Normal): 1 00:07:30.877 Atomic Write Unit (PFail): 1 00:07:30.877 Atomic Compare & Write Unit: 1 00:07:30.877 Fused Compare & Write: Not Supported 00:07:30.877 Scatter-Gather List 00:07:30.877 SGL Command Set: Supported 00:07:30.877 SGL Keyed: Not Supported 00:07:30.877 SGL Bit Bucket Descriptor: Not Supported 00:07:30.877 SGL Metadata Pointer: Not Supported 00:07:30.877 Oversized SGL: Not Supported 00:07:30.877 SGL Metadata Address: Not Supported 00:07:30.877 SGL Offset: Not Supported 00:07:30.877 Transport SGL Data Block: Not Supported 00:07:30.877 Replay Protected Memory Block: Not Supported 00:07:30.877 00:07:30.877 Firmware Slot Information 00:07:30.877 ========================= 00:07:30.877 Active slot: 1 00:07:30.877 Slot 1 Firmware Revision: 1.0 00:07:30.877 00:07:30.877 00:07:30.877 Commands Supported and Effects 00:07:30.877 ============================== 00:07:30.877 Admin Commands 00:07:30.877 -------------- 00:07:30.877 Delete I/O Submission Queue (00h): Supported 00:07:30.877 Create I/O Submission Queue (01h): Supported 00:07:30.877 Get Log Page (02h): Supported 00:07:30.877 Delete I/O Completion Queue (04h): Supported 00:07:30.877 Create I/O Completion Queue (05h): Supported 00:07:30.877 Identify (06h): Supported 
00:07:30.877 Abort (08h): Supported 00:07:30.877 Set Features (09h): Supported 00:07:30.877 Get Features (0Ah): Supported 00:07:30.877 Asynchronous Event Request (0Ch): Supported 00:07:30.877 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:30.877 Directive Send (19h): Supported 00:07:30.877 Directive Receive (1Ah): Supported 00:07:30.877 Virtualization Management (1Ch): Supported 00:07:30.877 Doorbell Buffer Config (7Ch): Supported 00:07:30.877 Format NVM (80h): Supported LBA-Change 00:07:30.877 I/O Commands 00:07:30.877 ------------ 00:07:30.877 Flush (00h): Supported LBA-Change 00:07:30.877 Write (01h): Supported LBA-Change 00:07:30.877 Read (02h): Supported 00:07:30.877 Compare (05h): Supported 00:07:30.877 Write Zeroes (08h): Supported LBA-Change 00:07:30.877 Dataset Management (09h): Supported LBA-Change 00:07:30.877 Unknown (0Ch): Supported 00:07:30.877 Unknown (12h): Supported 00:07:30.877 Copy (19h): Supported LBA-Change 00:07:30.877 Unknown (1Dh): Supported LBA-Change 00:07:30.877 00:07:30.877 Error Log 00:07:30.877 ========= 00:07:30.877 00:07:30.877 Arbitration 00:07:30.877 =========== 00:07:30.877 Arbitration Burst: no limit 00:07:30.877 00:07:30.877 Power Management 00:07:30.877 ================ 00:07:30.877 Number of Power States: 1 00:07:30.877 Current Power State: Power State #0 00:07:30.877 Power State #0: 00:07:30.877 Max Power: 25.00 W 00:07:30.877 Non-Operational State: Operational 00:07:30.877 Entry Latency: 16 microseconds 00:07:30.877 Exit Latency: 4 microseconds 00:07:30.877 Relative Read Throughput: 0 00:07:30.877 Relative Read Latency: 0 00:07:30.877 Relative Write Throughput: 0 00:07:30.877 Relative Write Latency: 0 00:07:30.877 Idle Power[2024-12-07 17:24:04.185921] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62818 terminated unexpected 00:07:30.877 : Not Reported 00:07:30.877 Active Power: Not Reported 00:07:30.877 Non-Operational Permissive Mode: Not Supported 00:07:30.877 00:07:30.877 Health Information 00:07:30.877 ================== 00:07:30.877 Critical Warnings: 00:07:30.877 Available Spare Space: OK 00:07:30.877 Temperature: OK 00:07:30.877 Device Reliability: OK 00:07:30.877 Read Only: No 00:07:30.877 Volatile Memory Backup: OK 00:07:30.877 Current Temperature: 323 Kelvin (50 Celsius) 00:07:30.877 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:30.877 Available Spare: 0% 00:07:30.878 Available Spare Threshold: 0% 00:07:30.878 Life Percentage Used: 0% 00:07:30.878 Data Units Read: 684 00:07:30.878 Data Units Written: 612 00:07:30.878 Host Read Commands: 39564 00:07:30.878 Host Write Commands: 39350 00:07:30.878 Controller Busy Time: 0 minutes 00:07:30.878 Power Cycles: 0 00:07:30.878 Power On Hours: 0 hours 00:07:30.878 Unsafe Shutdowns: 0 00:07:30.878 Unrecoverable Media Errors: 0 00:07:30.878 Lifetime Error Log Entries: 0 00:07:30.878 Warning Temperature Time: 0 minutes 00:07:30.878 Critical Temperature Time: 0 minutes 00:07:30.878 00:07:30.878 Number of Queues 00:07:30.878 ================ 00:07:30.878 Number of I/O Submission Queues: 64 00:07:30.878 Number of I/O Completion Queues: 64 00:07:30.878 00:07:30.878 ZNS Specific Controller Data 00:07:30.878 ============================ 00:07:30.878 Zone Append Size Limit: 0 00:07:30.878 00:07:30.878 00:07:30.878 Active Namespaces 00:07:30.878 ================= 00:07:30.878 Namespace ID:1 00:07:30.878 Error Recovery Timeout: Unlimited 00:07:30.878 Command Set Identifier: NVM (00h) 00:07:30.878 Deallocate: Supported 00:07:30.878 
Deallocated/Unwritten Error: Supported 00:07:30.878 Deallocated Read Value: All 0x00 00:07:30.878 Deallocate in Write Zeroes: Not Supported 00:07:30.878 Deallocated Guard Field: 0xFFFF 00:07:30.878 Flush: Supported 00:07:30.878 Reservation: Not Supported 00:07:30.878 Metadata Transferred as: Separate Metadata Buffer 00:07:30.878 Namespace Sharing Capabilities: Private 00:07:30.878 Size (in LBAs): 1548666 (5GiB) 00:07:30.878 Capacity (in LBAs): 1548666 (5GiB) 00:07:30.878 Utilization (in LBAs): 1548666 (5GiB) 00:07:30.878 Thin Provisioning: Not Supported 00:07:30.878 Per-NS Atomic Units: No 00:07:30.878 Maximum Single Source Range Length: 128 00:07:30.878 Maximum Copy Length: 128 00:07:30.878 Maximum Source Range Count: 128 00:07:30.878 NGUID/EUI64 Never Reused: No 00:07:30.878 Namespace Write Protected: No 00:07:30.878 Number of LBA Formats: 8 00:07:30.878 Current LBA Format: LBA Format #07 00:07:30.878 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:30.878 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:30.878 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:30.878 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:30.878 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:30.878 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:30.878 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:30.878 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:30.878 00:07:30.878 NVM Specific Namespace Data 00:07:30.878 =========================== 00:07:30.878 Logical Block Storage Tag Mask: 0 00:07:30.878 Protection Information Capabilities: 00:07:30.878 16b Guard Protection Information Storage Tag Support: No 00:07:30.878 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:30.878 Storage Tag Check Read Support: No 00:07:30.878 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.878 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.878 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.878 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.878 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.878 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.878 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.878 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.878 ===================================================== 00:07:30.878 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:30.878 ===================================================== 00:07:30.878 Controller Capabilities/Features 00:07:30.878 ================================ 00:07:30.878 Vendor ID: 1b36 00:07:30.878 Subsystem Vendor ID: 1af4 00:07:30.878 Serial Number: 12341 00:07:30.878 Model Number: QEMU NVMe Ctrl 00:07:30.878 Firmware Version: 8.0.0 00:07:30.878 Recommended Arb Burst: 6 00:07:30.878 IEEE OUI Identifier: 00 54 52 00:07:30.878 Multi-path I/O 00:07:30.878 May have multiple subsystem ports: No 00:07:30.878 May have multiple controllers: No 00:07:30.878 Associated with SR-IOV VF: No 00:07:30.878 Max Data Transfer Size: 524288 00:07:30.878 Max Number of Namespaces: 256 00:07:30.878 Max Number of I/O Queues: 64 00:07:30.878 NVMe Specification Version (VS): 1.4 00:07:30.878 NVMe 
Specification Version (Identify): 1.4 00:07:30.878 Maximum Queue Entries: 2048 00:07:30.878 Contiguous Queues Required: Yes 00:07:30.878 Arbitration Mechanisms Supported 00:07:30.878 Weighted Round Robin: Not Supported 00:07:30.878 Vendor Specific: Not Supported 00:07:30.878 Reset Timeout: 7500 ms 00:07:30.878 Doorbell Stride: 4 bytes 00:07:30.878 NVM Subsystem Reset: Not Supported 00:07:30.878 Command Sets Supported 00:07:30.878 NVM Command Set: Supported 00:07:30.878 Boot Partition: Not Supported 00:07:30.878 Memory Page Size Minimum: 4096 bytes 00:07:30.878 Memory Page Size Maximum: 65536 bytes 00:07:30.878 Persistent Memory Region: Not Supported 00:07:30.878 Optional Asynchronous Events Supported 00:07:30.878 Namespace Attribute Notices: Supported 00:07:30.878 Firmware Activation Notices: Not Supported 00:07:30.878 ANA Change Notices: Not Supported 00:07:30.878 PLE Aggregate Log Change Notices: Not Supported 00:07:30.878 LBA Status Info Alert Notices: Not Supported 00:07:30.878 EGE Aggregate Log Change Notices: Not Supported 00:07:30.878 Normal NVM Subsystem Shutdown event: Not Supported 00:07:30.878 Zone Descriptor Change Notices: Not Supported 00:07:30.878 Discovery Log Change Notices: Not Supported 00:07:30.878 Controller Attributes 00:07:30.878 128-bit Host Identifier: Not Supported 00:07:30.878 Non-Operational Permissive Mode: Not Supported 00:07:30.878 NVM Sets: Not Supported 00:07:30.878 Read Recovery Levels: Not Supported 00:07:30.878 Endurance Groups: Not Supported 00:07:30.878 Predictable Latency Mode: Not Supported 00:07:30.878 Traffic Based Keep ALive: Not Supported 00:07:30.878 Namespace Granularity: Not Supported 00:07:30.878 SQ Associations: Not Supported 00:07:30.878 UUID List: Not Supported 00:07:30.878 Multi-Domain Subsystem: Not Supported 00:07:30.878 Fixed Capacity Management: Not Supported 00:07:30.878 Variable Capacity Management: Not Supported 00:07:30.878 Delete Endurance Group: Not Supported 00:07:30.878 Delete NVM Set: Not Supported 00:07:30.878 Extended LBA Formats Supported: Supported 00:07:30.878 Flexible Data Placement Supported: Not Supported 00:07:30.878 00:07:30.878 Controller Memory Buffer Support 00:07:30.878 ================================ 00:07:30.878 Supported: No 00:07:30.878 00:07:30.878 Persistent Memory Region Support 00:07:30.878 ================================ 00:07:30.878 Supported: No 00:07:30.878 00:07:30.878 Admin Command Set Attributes 00:07:30.878 ============================ 00:07:30.878 Security Send/Receive: Not Supported 00:07:30.878 Format NVM: Supported 00:07:30.878 Firmware Activate/Download: Not Supported 00:07:30.878 Namespace Management: Supported 00:07:30.878 Device Self-Test: Not Supported 00:07:30.878 Directives: Supported 00:07:30.878 NVMe-MI: Not Supported 00:07:30.878 Virtualization Management: Not Supported 00:07:30.878 Doorbell Buffer Config: Supported 00:07:30.878 Get LBA Status Capability: Not Supported 00:07:30.878 Command & Feature Lockdown Capability: Not Supported 00:07:30.878 Abort Command Limit: 4 00:07:30.878 Async Event Request Limit: 4 00:07:30.878 Number of Firmware Slots: N/A 00:07:30.878 Firmware Slot 1 Read-Only: N/A 00:07:30.878 Firmware Activation Without Reset: N/A 00:07:30.878 Multiple Update Detection Support: N/A 00:07:30.878 Firmware Update Granularity: No Information Provided 00:07:30.878 Per-Namespace SMART Log: Yes 00:07:30.878 Asymmetric Namespace Access Log Page: Not Supported 00:07:30.878 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:30.878 Command Effects Log Page: Supported 
00:07:30.878 Get Log Page Extended Data: Supported 00:07:30.878 Telemetry Log Pages: Not Supported 00:07:30.878 Persistent Event Log Pages: Not Supported 00:07:30.878 Supported Log Pages Log Page: May Support 00:07:30.878 Commands Supported & Effects Log Page: Not Supported 00:07:30.878 Feature Identifiers & Effects Log Page:May Support 00:07:30.878 NVMe-MI Commands & Effects Log Page: May Support 00:07:30.878 Data Area 4 for Telemetry Log: Not Supported 00:07:30.878 Error Log Page Entries Supported: 1 00:07:30.878 Keep Alive: Not Supported 00:07:30.878 00:07:30.878 NVM Command Set Attributes 00:07:30.878 ========================== 00:07:30.878 Submission Queue Entry Size 00:07:30.878 Max: 64 00:07:30.878 Min: 64 00:07:30.878 Completion Queue Entry Size 00:07:30.879 Max: 16 00:07:30.879 Min: 16 00:07:30.879 Number of Namespaces: 256 00:07:30.879 Compare Command: Supported 00:07:30.879 Write Uncorrectable Command: Not Supported 00:07:30.879 Dataset Management Command: Supported 00:07:30.879 Write Zeroes Command: Supported 00:07:30.879 Set Features Save Field: Supported 00:07:30.879 Reservations: Not Supported 00:07:30.879 Timestamp: Supported 00:07:30.879 Copy: Supported 00:07:30.879 Volatile Write Cache: Present 00:07:30.879 Atomic Write Unit (Normal): 1 00:07:30.879 Atomic Write Unit (PFail): 1 00:07:30.879 Atomic Compare & Write Unit: 1 00:07:30.879 Fused Compare & Write: Not Supported 00:07:30.879 Scatter-Gather List 00:07:30.879 SGL Command Set: Supported 00:07:30.879 SGL Keyed: Not Supported 00:07:30.879 SGL Bit Bucket Descriptor: Not Supported 00:07:30.879 SGL Metadata Pointer: Not Supported 00:07:30.879 Oversized SGL: Not Supported 00:07:30.879 SGL Metadata Address: Not Supported 00:07:30.879 SGL Offset: Not Supported 00:07:30.879 Transport SGL Data Block: Not Supported 00:07:30.879 Replay Protected Memory Block: Not Supported 00:07:30.879 00:07:30.879 Firmware Slot Information 00:07:30.879 ========================= 00:07:30.879 Active slot: 1 00:07:30.879 Slot 1 Firmware Revision: 1.0 00:07:30.879 00:07:30.879 00:07:30.879 Commands Supported and Effects 00:07:30.879 ============================== 00:07:30.879 Admin Commands 00:07:30.879 -------------- 00:07:30.879 Delete I/O Submission Queue (00h): Supported 00:07:30.879 Create I/O Submission Queue (01h): Supported 00:07:30.879 Get Log Page (02h): Supported 00:07:30.879 Delete I/O Completion Queue (04h): Supported 00:07:30.879 Create I/O Completion Queue (05h): Supported 00:07:30.879 Identify (06h): Supported 00:07:30.879 Abort (08h): Supported 00:07:30.879 Set Features (09h): Supported 00:07:30.879 Get Features (0Ah): Supported 00:07:30.879 Asynchronous Event Request (0Ch): Supported 00:07:30.879 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:30.879 Directive Send (19h): Supported 00:07:30.879 Directive Receive (1Ah): Supported 00:07:30.879 Virtualization Management (1Ch): Supported 00:07:30.879 Doorbell Buffer Config (7Ch): Supported 00:07:30.879 Format NVM (80h): Supported LBA-Change 00:07:30.879 I/O Commands 00:07:30.879 ------------ 00:07:30.879 Flush (00h): Supported LBA-Change 00:07:30.879 Write (01h): Supported LBA-Change 00:07:30.879 Read (02h): Supported 00:07:30.879 Compare (05h): Supported 00:07:30.879 Write Zeroes (08h): Supported LBA-Change 00:07:30.879 Dataset Management (09h): Supported LBA-Change 00:07:30.879 Unknown (0Ch): Supported 00:07:30.879 Unknown (12h): Supported 00:07:30.879 Copy (19h): Supported LBA-Change 00:07:30.879 Unknown (1Dh): Supported LBA-Change 00:07:30.879 00:07:30.879 Error 
Log 00:07:30.879 ========= 00:07:30.879 00:07:30.879 Arbitration 00:07:30.879 =========== 00:07:30.879 Arbitration Burst: no limit 00:07:30.879 00:07:30.879 Power Management 00:07:30.879 ================ 00:07:30.879 Number of Power States: 1 00:07:30.879 Current Power State: Power State #0 00:07:30.879 Power State #0: 00:07:30.879 Max Power: 25.00 W 00:07:30.879 Non-Operational State: Operational 00:07:30.879 Entry Latency: 16 microseconds 00:07:30.879 Exit Latency: 4 microseconds 00:07:30.879 Relative Read Throughput: 0 00:07:30.879 Relative Read Latency: 0 00:07:30.879 Relative Write Throughput: 0 00:07:30.879 Relative Write Latency: 0 00:07:30.879 Idle Power: Not Reported 00:07:30.879 Active Power: Not Reported 00:07:30.879 Non-Operational Permissive Mode: Not Supported 00:07:30.879 00:07:30.879 Health Information 00:07:30.879 ================== 00:07:30.879 Critical Warnings: 00:07:30.879 Available Spare Space: OK 00:07:30.879 Temperature: [2024-12-07 17:24:04.187249] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62818 terminated unexpected 00:07:30.879 OK 00:07:30.879 Device Reliability: OK 00:07:30.879 Read Only: No 00:07:30.879 Volatile Memory Backup: OK 00:07:30.879 Current Temperature: 323 Kelvin (50 Celsius) 00:07:30.879 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:30.879 Available Spare: 0% 00:07:30.879 Available Spare Threshold: 0% 00:07:30.879 Life Percentage Used: 0% 00:07:30.879 Data Units Read: 1090 00:07:30.879 Data Units Written: 951 00:07:30.879 Host Read Commands: 59084 00:07:30.879 Host Write Commands: 57780 00:07:30.879 Controller Busy Time: 0 minutes 00:07:30.879 Power Cycles: 0 00:07:30.879 Power On Hours: 0 hours 00:07:30.879 Unsafe Shutdowns: 0 00:07:30.879 Unrecoverable Media Errors: 0 00:07:30.879 Lifetime Error Log Entries: 0 00:07:30.879 Warning Temperature Time: 0 minutes 00:07:30.879 Critical Temperature Time: 0 minutes 00:07:30.879 00:07:30.879 Number of Queues 00:07:30.879 ================ 00:07:30.879 Number of I/O Submission Queues: 64 00:07:30.879 Number of I/O Completion Queues: 64 00:07:30.879 00:07:30.879 ZNS Specific Controller Data 00:07:30.879 ============================ 00:07:30.879 Zone Append Size Limit: 0 00:07:30.879 00:07:30.879 00:07:30.879 Active Namespaces 00:07:30.879 ================= 00:07:30.879 Namespace ID:1 00:07:30.879 Error Recovery Timeout: Unlimited 00:07:30.879 Command Set Identifier: NVM (00h) 00:07:30.879 Deallocate: Supported 00:07:30.879 Deallocated/Unwritten Error: Supported 00:07:30.879 Deallocated Read Value: All 0x00 00:07:30.879 Deallocate in Write Zeroes: Not Supported 00:07:30.879 Deallocated Guard Field: 0xFFFF 00:07:30.879 Flush: Supported 00:07:30.879 Reservation: Not Supported 00:07:30.879 Namespace Sharing Capabilities: Private 00:07:30.879 Size (in LBAs): 1310720 (5GiB) 00:07:30.879 Capacity (in LBAs): 1310720 (5GiB) 00:07:30.879 Utilization (in LBAs): 1310720 (5GiB) 00:07:30.879 Thin Provisioning: Not Supported 00:07:30.879 Per-NS Atomic Units: No 00:07:30.879 Maximum Single Source Range Length: 128 00:07:30.879 Maximum Copy Length: 128 00:07:30.879 Maximum Source Range Count: 128 00:07:30.879 NGUID/EUI64 Never Reused: No 00:07:30.879 Namespace Write Protected: No 00:07:30.879 Number of LBA Formats: 8 00:07:30.879 Current LBA Format: LBA Format #04 00:07:30.879 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:30.879 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:30.879 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:30.879 LBA Format #03: 
Data Size: 512 Metadata Size: 64 00:07:30.879 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:30.879 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:30.879 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:30.879 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:30.879 00:07:30.879 NVM Specific Namespace Data 00:07:30.879 =========================== 00:07:30.879 Logical Block Storage Tag Mask: 0 00:07:30.879 Protection Information Capabilities: 00:07:30.879 16b Guard Protection Information Storage Tag Support: No 00:07:30.879 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:30.879 Storage Tag Check Read Support: No 00:07:30.879 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.879 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.879 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.879 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.879 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.879 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.879 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.879 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.879 ===================================================== 00:07:30.879 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:30.879 ===================================================== 00:07:30.879 Controller Capabilities/Features 00:07:30.879 ================================ 00:07:30.879 Vendor ID: 1b36 00:07:30.879 Subsystem Vendor ID: 1af4 00:07:30.879 Serial Number: 12343 00:07:30.879 Model Number: QEMU NVMe Ctrl 00:07:30.879 Firmware Version: 8.0.0 00:07:30.879 Recommended Arb Burst: 6 00:07:30.879 IEEE OUI Identifier: 00 54 52 00:07:30.879 Multi-path I/O 00:07:30.879 May have multiple subsystem ports: No 00:07:30.879 May have multiple controllers: Yes 00:07:30.879 Associated with SR-IOV VF: No 00:07:30.879 Max Data Transfer Size: 524288 00:07:30.879 Max Number of Namespaces: 256 00:07:30.879 Max Number of I/O Queues: 64 00:07:30.879 NVMe Specification Version (VS): 1.4 00:07:30.879 NVMe Specification Version (Identify): 1.4 00:07:30.879 Maximum Queue Entries: 2048 00:07:30.879 Contiguous Queues Required: Yes 00:07:30.879 Arbitration Mechanisms Supported 00:07:30.879 Weighted Round Robin: Not Supported 00:07:30.880 Vendor Specific: Not Supported 00:07:30.880 Reset Timeout: 7500 ms 00:07:30.880 Doorbell Stride: 4 bytes 00:07:30.880 NVM Subsystem Reset: Not Supported 00:07:30.880 Command Sets Supported 00:07:30.880 NVM Command Set: Supported 00:07:30.880 Boot Partition: Not Supported 00:07:30.880 Memory Page Size Minimum: 4096 bytes 00:07:30.880 Memory Page Size Maximum: 65536 bytes 00:07:30.880 Persistent Memory Region: Not Supported 00:07:30.880 Optional Asynchronous Events Supported 00:07:30.880 Namespace Attribute Notices: Supported 00:07:30.880 Firmware Activation Notices: Not Supported 00:07:30.880 ANA Change Notices: Not Supported 00:07:30.880 PLE Aggregate Log Change Notices: Not Supported 00:07:30.880 LBA Status Info Alert Notices: Not Supported 00:07:30.880 EGE Aggregate Log Change Notices: Not Supported 00:07:30.880 Normal NVM Subsystem Shutdown event: Not Supported 00:07:30.880 Zone 
Descriptor Change Notices: Not Supported 00:07:30.880 Discovery Log Change Notices: Not Supported 00:07:30.880 Controller Attributes 00:07:30.880 128-bit Host Identifier: Not Supported 00:07:30.880 Non-Operational Permissive Mode: Not Supported 00:07:30.880 NVM Sets: Not Supported 00:07:30.880 Read Recovery Levels: Not Supported 00:07:30.880 Endurance Groups: Supported 00:07:30.880 Predictable Latency Mode: Not Supported 00:07:30.880 Traffic Based Keep ALive: Not Supported 00:07:30.880 Namespace Granularity: Not Supported 00:07:30.880 SQ Associations: Not Supported 00:07:30.880 UUID List: Not Supported 00:07:30.880 Multi-Domain Subsystem: Not Supported 00:07:30.880 Fixed Capacity Management: Not Supported 00:07:30.880 Variable Capacity Management: Not Supported 00:07:30.880 Delete Endurance Group: Not Supported 00:07:30.880 Delete NVM Set: Not Supported 00:07:30.880 Extended LBA Formats Supported: Supported 00:07:30.880 Flexible Data Placement Supported: Supported 00:07:30.880 00:07:30.880 Controller Memory Buffer Support 00:07:30.880 ================================ 00:07:30.880 Supported: No 00:07:30.880 00:07:30.880 Persistent Memory Region Support 00:07:30.880 ================================ 00:07:30.880 Supported: No 00:07:30.880 00:07:30.880 Admin Command Set Attributes 00:07:30.880 ============================ 00:07:30.880 Security Send/Receive: Not Supported 00:07:30.880 Format NVM: Supported 00:07:30.880 Firmware Activate/Download: Not Supported 00:07:30.880 Namespace Management: Supported 00:07:30.880 Device Self-Test: Not Supported 00:07:30.880 Directives: Supported 00:07:30.880 NVMe-MI: Not Supported 00:07:30.880 Virtualization Management: Not Supported 00:07:30.880 Doorbell Buffer Config: Supported 00:07:30.880 Get LBA Status Capability: Not Supported 00:07:30.880 Command & Feature Lockdown Capability: Not Supported 00:07:30.880 Abort Command Limit: 4 00:07:30.880 Async Event Request Limit: 4 00:07:30.880 Number of Firmware Slots: N/A 00:07:30.880 Firmware Slot 1 Read-Only: N/A 00:07:30.880 Firmware Activation Without Reset: N/A 00:07:30.880 Multiple Update Detection Support: N/A 00:07:30.880 Firmware Update Granularity: No Information Provided 00:07:30.880 Per-Namespace SMART Log: Yes 00:07:30.880 Asymmetric Namespace Access Log Page: Not Supported 00:07:30.880 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:30.880 Command Effects Log Page: Supported 00:07:30.880 Get Log Page Extended Data: Supported 00:07:30.880 Telemetry Log Pages: Not Supported 00:07:30.880 Persistent Event Log Pages: Not Supported 00:07:30.880 Supported Log Pages Log Page: May Support 00:07:30.880 Commands Supported & Effects Log Page: Not Supported 00:07:30.880 Feature Identifiers & Effects Log Page:May Support 00:07:30.880 NVMe-MI Commands & Effects Log Page: May Support 00:07:30.880 Data Area 4 for Telemetry Log: Not Supported 00:07:30.880 Error Log Page Entries Supported: 1 00:07:30.880 Keep Alive: Not Supported 00:07:30.880 00:07:30.880 NVM Command Set Attributes 00:07:30.880 ========================== 00:07:30.880 Submission Queue Entry Size 00:07:30.880 Max: 64 00:07:30.880 Min: 64 00:07:30.880 Completion Queue Entry Size 00:07:30.880 Max: 16 00:07:30.880 Min: 16 00:07:30.880 Number of Namespaces: 256 00:07:30.880 Compare Command: Supported 00:07:30.880 Write Uncorrectable Command: Not Supported 00:07:30.880 Dataset Management Command: Supported 00:07:30.880 Write Zeroes Command: Supported 00:07:30.880 Set Features Save Field: Supported 00:07:30.880 Reservations: Not Supported 00:07:30.880 
Timestamp: Supported 00:07:30.880 Copy: Supported 00:07:30.880 Volatile Write Cache: Present 00:07:30.880 Atomic Write Unit (Normal): 1 00:07:30.880 Atomic Write Unit (PFail): 1 00:07:30.880 Atomic Compare & Write Unit: 1 00:07:30.880 Fused Compare & Write: Not Supported 00:07:30.880 Scatter-Gather List 00:07:30.880 SGL Command Set: Supported 00:07:30.880 SGL Keyed: Not Supported 00:07:30.880 SGL Bit Bucket Descriptor: Not Supported 00:07:30.880 SGL Metadata Pointer: Not Supported 00:07:30.880 Oversized SGL: Not Supported 00:07:30.880 SGL Metadata Address: Not Supported 00:07:30.880 SGL Offset: Not Supported 00:07:30.880 Transport SGL Data Block: Not Supported 00:07:30.880 Replay Protected Memory Block: Not Supported 00:07:30.880 00:07:30.880 Firmware Slot Information 00:07:30.880 ========================= 00:07:30.880 Active slot: 1 00:07:30.880 Slot 1 Firmware Revision: 1.0 00:07:30.880 00:07:30.880 00:07:30.880 Commands Supported and Effects 00:07:30.880 ============================== 00:07:30.880 Admin Commands 00:07:30.880 -------------- 00:07:30.880 Delete I/O Submission Queue (00h): Supported 00:07:30.880 Create I/O Submission Queue (01h): Supported 00:07:30.880 Get Log Page (02h): Supported 00:07:30.880 Delete I/O Completion Queue (04h): Supported 00:07:30.880 Create I/O Completion Queue (05h): Supported 00:07:30.880 Identify (06h): Supported 00:07:30.880 Abort (08h): Supported 00:07:30.880 Set Features (09h): Supported 00:07:30.880 Get Features (0Ah): Supported 00:07:30.880 Asynchronous Event Request (0Ch): Supported 00:07:30.880 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:30.880 Directive Send (19h): Supported 00:07:30.880 Directive Receive (1Ah): Supported 00:07:30.880 Virtualization Management (1Ch): Supported 00:07:30.880 Doorbell Buffer Config (7Ch): Supported 00:07:30.880 Format NVM (80h): Supported LBA-Change 00:07:30.880 I/O Commands 00:07:30.880 ------------ 00:07:30.880 Flush (00h): Supported LBA-Change 00:07:30.880 Write (01h): Supported LBA-Change 00:07:30.880 Read (02h): Supported 00:07:30.880 Compare (05h): Supported 00:07:30.880 Write Zeroes (08h): Supported LBA-Change 00:07:30.880 Dataset Management (09h): Supported LBA-Change 00:07:30.880 Unknown (0Ch): Supported 00:07:30.880 Unknown (12h): Supported 00:07:30.880 Copy (19h): Supported LBA-Change 00:07:30.880 Unknown (1Dh): Supported LBA-Change 00:07:30.880 00:07:30.880 Error Log 00:07:30.880 ========= 00:07:30.880 00:07:30.880 Arbitration 00:07:30.880 =========== 00:07:30.880 Arbitration Burst: no limit 00:07:30.880 00:07:30.880 Power Management 00:07:30.880 ================ 00:07:30.880 Number of Power States: 1 00:07:30.880 Current Power State: Power State #0 00:07:30.880 Power State #0: 00:07:30.880 Max Power: 25.00 W 00:07:30.880 Non-Operational State: Operational 00:07:30.880 Entry Latency: 16 microseconds 00:07:30.880 Exit Latency: 4 microseconds 00:07:30.880 Relative Read Throughput: 0 00:07:30.880 Relative Read Latency: 0 00:07:30.880 Relative Write Throughput: 0 00:07:30.880 Relative Write Latency: 0 00:07:30.880 Idle Power: Not Reported 00:07:30.880 Active Power: Not Reported 00:07:30.880 Non-Operational Permissive Mode: Not Supported 00:07:30.880 00:07:30.880 Health Information 00:07:30.880 ================== 00:07:30.880 Critical Warnings: 00:07:30.880 Available Spare Space: OK 00:07:30.880 Temperature: OK 00:07:30.880 Device Reliability: OK 00:07:30.880 Read Only: No 00:07:30.880 Volatile Memory Backup: OK 00:07:30.880 Current Temperature: 323 Kelvin (50 Celsius) 00:07:30.880 
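Note that identify prints thermal fields in kelvin with the Celsius value parenthesized; the conversion is a plain offset of 273, so the 323 K reading above is 50 °C and the 343 K threshold below is 70 °C. For scripting against these dumps:

    k2c() { echo $(( $1 - 273 )); }   # kelvin to Celsius, matching the tool's output
    k2c 323   # prints 50
    k2c 343   # prints 70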
Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:30.880 Available Spare: 0% 00:07:30.880 Available Spare Threshold: 0% 00:07:30.880 Life Percentage Used: 0% 00:07:30.880 Data Units Read: 1217 00:07:30.880 Data Units Written: 1146 00:07:30.880 Host Read Commands: 44266 00:07:30.880 Host Write Commands: 43690 00:07:30.880 Controller Busy Time: 0 minutes 00:07:30.880 Power Cycles: 0 00:07:30.881 Power On Hours: 0 hours 00:07:30.881 Unsafe Shutdowns: 0 00:07:30.881 Unrecoverable Media Errors: 0 00:07:30.881 Lifetime Error Log Entries: 0 00:07:30.881 Warning Temperature Time: 0 minutes 00:07:30.881 Critical Temperature Time: 0 minutes 00:07:30.881 00:07:30.881 Number of Queues 00:07:30.881 ================ 00:07:30.881 Number of I/O Submission Queues: 64 00:07:30.881 Number of I/O Completion Queues: 64 00:07:30.881 00:07:30.881 ZNS Specific Controller Data 00:07:30.881 ============================ 00:07:30.881 Zone Append Size Limit: 0 00:07:30.881 00:07:30.881 00:07:30.881 Active Namespaces 00:07:30.881 ================= 00:07:30.881 Namespace ID:1 00:07:30.881 Error Recovery Timeout: Unlimited 00:07:30.881 Command Set Identifier: NVM (00h) 00:07:30.881 Deallocate: Supported 00:07:30.881 Deallocated/Unwritten Error: Supported 00:07:30.881 Deallocated Read Value: All 0x00 00:07:30.881 Deallocate in Write Zeroes: Not Supported 00:07:30.881 Deallocated Guard Field: 0xFFFF 00:07:30.881 Flush: Supported 00:07:30.881 Reservation: Not Supported 00:07:30.881 Namespace Sharing Capabilities: Multiple Controllers 00:07:30.881 Size (in LBAs): 262144 (1GiB) 00:07:30.881 Capacity (in LBAs): 262144 (1GiB) 00:07:30.881 Utilization (in LBAs): 262144 (1GiB) 00:07:30.881 Thin Provisioning: Not Supported 00:07:30.881 Per-NS Atomic Units: No 00:07:30.881 Maximum Single Source Range Length: 128 00:07:30.881 Maximum Copy Length: 128 00:07:30.881 Maximum Source Range Count: 128 00:07:30.881 NGUID/EUI64 Never Reused: No 00:07:30.881 Namespace Write Protected: No 00:07:30.881 Endurance group ID: 1 00:07:30.881 Number of LBA Formats: 8 00:07:30.881 Current LBA Format: LBA Format #04 00:07:30.881 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:30.881 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:30.881 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:30.881 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:30.881 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:30.881 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:30.881 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:30.881 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:30.881 00:07:30.881 Get Feature FDP: 00:07:30.881 ================ 00:07:30.881 Enabled: Yes 00:07:30.881 FDP configuration index: 0 00:07:30.881 00:07:30.881 FDP configurations log page 00:07:30.881 =========================== 00:07:30.881 Number of FDP configurations: 1 00:07:30.881 Version: 0 00:07:30.881 Size: 112 00:07:30.881 FDP Configuration Descriptor: 0 00:07:30.881 Descriptor Size: 96 00:07:30.881 Reclaim Group Identifier format: 2 00:07:30.881 FDP Volatile Write Cache: Not Present 00:07:30.881 FDP Configuration: Valid 00:07:30.881 Vendor Specific Size: 0 00:07:30.881 Number of Reclaim Groups: 2 00:07:30.881 Number of Recalim Unit Handles: 8 00:07:30.881 Max Placement Identifiers: 128 00:07:30.881 Number of Namespaces Suppprted: 256 00:07:30.881 Reclaim unit Nominal Size: 6000000 bytes 00:07:30.881 Estimated Reclaim Unit Time Limit: Not Reported 00:07:30.881 RUH Desc #000: RUH Type: Initially Isolated 00:07:30.881 RUH Desc #001: RUH 
Type: Initially Isolated 00:07:30.881 RUH Desc #002: RUH Type: Initially Isolated 00:07:30.881 RUH Desc #003: RUH Type: Initially Isolated 00:07:30.881 RUH Desc #004: RUH Type: Initially Isolated 00:07:30.881 RUH Desc #005: RUH Type: Initially Isolated 00:07:30.881 RUH Desc #006: RUH Type: Initially Isolated 00:07:30.881 RUH Desc #007: RUH Type: Initially Isolated 00:07:30.881 00:07:30.881 FDP reclaim unit handle usage log page 00:07:30.881 ====================================== 00:07:30.881 Number of Reclaim Unit Handles: 8 00:07:30.881 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:30.881 RUH Usage Desc #001: RUH Attributes: Unused 00:07:30.881 RUH Usage Desc #002: RUH Attributes: Unused 00:07:30.881 RUH Usage Desc #003: RUH Attributes: Unused 00:07:30.881 RUH Usage Desc #004: RUH Attributes: Unused 00:07:30.881 RUH Usage Desc #005: RUH Attributes: Unused 00:07:30.881 RUH Usage Desc #006: RUH Attributes: Unused 00:07:30.881 RUH Usage Desc #007: RUH Attributes: Unused 00:07:30.881 00:07:30.881 FDP statistics log page 00:07:30.881 ======================= 00:07:30.881 Host bytes with metadata written: 688300032 00:07:30.881 [2024-12-07 17:24:04.188867] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62818 terminated unexpected 00:07:30.881 Media bytes with metadata written: 688390144 00:07:30.881 Media bytes erased: 0 00:07:30.881 00:07:30.881 FDP events log page 00:07:30.881 =================== 00:07:30.881 Number of FDP events: 0 00:07:30.881 00:07:30.881 NVM Specific Namespace Data 00:07:30.881 =========================== 00:07:30.881 Logical Block Storage Tag Mask: 0 00:07:30.881 Protection Information Capabilities: 00:07:30.881 16b Guard Protection Information Storage Tag Support: No 00:07:30.881 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:30.881 Storage Tag Check Read Support: No 00:07:30.881 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.881 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.881 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.881 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.881 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.881 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.881 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.881 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.881 ===================================================== 00:07:30.881 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:30.881 ===================================================== 00:07:30.881 Controller Capabilities/Features 00:07:30.881 ================================ 00:07:30.881 Vendor ID: 1b36 00:07:30.881 Subsystem Vendor ID: 1af4 00:07:30.881 Serial Number: 12342 00:07:30.881 Model Number: QEMU NVMe Ctrl 00:07:30.881 Firmware Version: 8.0.0 00:07:30.881 Recommended Arb Burst: 6 00:07:30.881 IEEE OUI Identifier: 00 54 52 00:07:30.881 Multi-path I/O 00:07:30.881 May have multiple subsystem ports: No 00:07:30.881 May have multiple controllers: No 00:07:30.881 Associated with SR-IOV VF: No 00:07:30.881 Max Data Transfer Size: 524288 00:07:30.881 Max Number of Namespaces: 256
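Of the four controllers, only 12343 (nqn.2019-08.org.qemu:fdp-subsys3) reports Endurance Groups and Flexible Data Placement, which is why the tool appended the three FDP pages above: configurations (log page 0x20), reclaim unit handle usage (0x21), and statistics (0x22). The same pages can be fetched raw with nvme-cli through the CUSE node the stub created; a hedged sketch, where the node name and flag spellings are assumptions rather than something this run shows:

    # FDP log pages per NVMe 2.0 / TP4146: 0x20 configs, 0x21 RUH usage, 0x22 stats, 0x23 events
    nvme get-log /dev/spdk/nvme2 --log-id=0x20 --log-len=512 --lsi=1   # FDP configurations, endurance group 1
    nvme get-log /dev/spdk/nvme2 --log-id=0x22 --log-len=64 --lsi=1    # FDP statistics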
00:07:30.881 Max Number of I/O Queues: 64 00:07:30.881 NVMe Specification Version (VS): 1.4 00:07:30.881 NVMe Specification Version (Identify): 1.4 00:07:30.881 Maximum Queue Entries: 2048 00:07:30.881 Contiguous Queues Required: Yes 00:07:30.881 Arbitration Mechanisms Supported 00:07:30.881 Weighted Round Robin: Not Supported 00:07:30.881 Vendor Specific: Not Supported 00:07:30.881 Reset Timeout: 7500 ms 00:07:30.881 Doorbell Stride: 4 bytes 00:07:30.881 NVM Subsystem Reset: Not Supported 00:07:30.881 Command Sets Supported 00:07:30.881 NVM Command Set: Supported 00:07:30.881 Boot Partition: Not Supported 00:07:30.881 Memory Page Size Minimum: 4096 bytes 00:07:30.881 Memory Page Size Maximum: 65536 bytes 00:07:30.881 Persistent Memory Region: Not Supported 00:07:30.881 Optional Asynchronous Events Supported 00:07:30.881 Namespace Attribute Notices: Supported 00:07:30.881 Firmware Activation Notices: Not Supported 00:07:30.881 ANA Change Notices: Not Supported 00:07:30.881 PLE Aggregate Log Change Notices: Not Supported 00:07:30.881 LBA Status Info Alert Notices: Not Supported 00:07:30.881 EGE Aggregate Log Change Notices: Not Supported 00:07:30.882 Normal NVM Subsystem Shutdown event: Not Supported 00:07:30.882 Zone Descriptor Change Notices: Not Supported 00:07:30.882 Discovery Log Change Notices: Not Supported 00:07:30.882 Controller Attributes 00:07:30.882 128-bit Host Identifier: Not Supported 00:07:30.882 Non-Operational Permissive Mode: Not Supported 00:07:30.882 NVM Sets: Not Supported 00:07:30.882 Read Recovery Levels: Not Supported 00:07:30.882 Endurance Groups: Not Supported 00:07:30.882 Predictable Latency Mode: Not Supported 00:07:30.882 Traffic Based Keep ALive: Not Supported 00:07:30.882 Namespace Granularity: Not Supported 00:07:30.882 SQ Associations: Not Supported 00:07:30.882 UUID List: Not Supported 00:07:30.882 Multi-Domain Subsystem: Not Supported 00:07:30.882 Fixed Capacity Management: Not Supported 00:07:30.882 Variable Capacity Management: Not Supported 00:07:30.882 Delete Endurance Group: Not Supported 00:07:30.882 Delete NVM Set: Not Supported 00:07:30.882 Extended LBA Formats Supported: Supported 00:07:30.882 Flexible Data Placement Supported: Not Supported 00:07:30.882 00:07:30.882 Controller Memory Buffer Support 00:07:30.882 ================================ 00:07:30.882 Supported: No 00:07:30.882 00:07:30.882 Persistent Memory Region Support 00:07:30.882 ================================ 00:07:30.882 Supported: No 00:07:30.882 00:07:30.882 Admin Command Set Attributes 00:07:30.882 ============================ 00:07:30.882 Security Send/Receive: Not Supported 00:07:30.882 Format NVM: Supported 00:07:30.882 Firmware Activate/Download: Not Supported 00:07:30.882 Namespace Management: Supported 00:07:30.882 Device Self-Test: Not Supported 00:07:30.882 Directives: Supported 00:07:30.882 NVMe-MI: Not Supported 00:07:30.882 Virtualization Management: Not Supported 00:07:30.882 Doorbell Buffer Config: Supported 00:07:30.882 Get LBA Status Capability: Not Supported 00:07:30.882 Command & Feature Lockdown Capability: Not Supported 00:07:30.882 Abort Command Limit: 4 00:07:30.882 Async Event Request Limit: 4 00:07:30.882 Number of Firmware Slots: N/A 00:07:30.882 Firmware Slot 1 Read-Only: N/A 00:07:30.882 Firmware Activation Without Reset: N/A 00:07:30.882 Multiple Update Detection Support: N/A 00:07:30.882 Firmware Update Granularity: No Information Provided 00:07:30.882 Per-Namespace SMART Log: Yes 00:07:30.882 Asymmetric Namespace Access Log Page: Not Supported 
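Several of the capability lines above are decoded power-of-two fields from the controller's CAP register and Identify Controller data. A minimal sketch of that arithmetic using the NVMe-spec formulas; the raw field values in the comments are assumptions inferred from the printed output, not read from the device:

```c
/* How the printed sizes fall out of power-of-two register fields.
 * MPSMIN/MPSMAX/DSTRD/MDTS below are assumed values that reproduce
 * the numbers shown in this log. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t mpsmin = 0, mpsmax = 4; /* CAP.MPSMIN / CAP.MPSMAX (assumed) */
    uint8_t dstrd  = 0;             /* CAP.DSTRD (assumed) */
    uint8_t mdts   = 7;             /* Identify Controller MDTS (assumed) */

    printf("Memory Page Size Minimum: %u bytes\n", 1u << (12 + mpsmin)); /* 4096 */
    printf("Memory Page Size Maximum: %u bytes\n", 1u << (12 + mpsmax)); /* 65536 */
    printf("Doorbell Stride: %u bytes\n", 1u << (2 + dstrd));            /* 4 */
    /* MDTS is expressed in units of the minimum memory page size. */
    printf("Max Data Transfer Size: %u\n", (1u << (12 + mpsmin)) << mdts); /* 524288 */
    return 0;
}
```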
00:07:30.882 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:30.882 Command Effects Log Page: Supported 00:07:30.882 Get Log Page Extended Data: Supported 00:07:30.882 Telemetry Log Pages: Not Supported 00:07:30.882 Persistent Event Log Pages: Not Supported 00:07:30.882 Supported Log Pages Log Page: May Support 00:07:30.882 Commands Supported & Effects Log Page: Not Supported 00:07:30.882 Feature Identifiers & Effects Log Page:May Support 00:07:30.882 NVMe-MI Commands & Effects Log Page: May Support 00:07:30.882 Data Area 4 for Telemetry Log: Not Supported 00:07:30.882 Error Log Page Entries Supported: 1 00:07:30.882 Keep Alive: Not Supported 00:07:30.882 00:07:30.882 NVM Command Set Attributes 00:07:30.882 ========================== 00:07:30.882 Submission Queue Entry Size 00:07:30.882 Max: 64 00:07:30.882 Min: 64 00:07:30.882 Completion Queue Entry Size 00:07:30.882 Max: 16 00:07:30.882 Min: 16 00:07:30.882 Number of Namespaces: 256 00:07:30.882 Compare Command: Supported 00:07:30.882 Write Uncorrectable Command: Not Supported 00:07:30.882 Dataset Management Command: Supported 00:07:30.882 Write Zeroes Command: Supported 00:07:30.882 Set Features Save Field: Supported 00:07:30.882 Reservations: Not Supported 00:07:30.882 Timestamp: Supported 00:07:30.882 Copy: Supported 00:07:30.882 Volatile Write Cache: Present 00:07:30.882 Atomic Write Unit (Normal): 1 00:07:30.882 Atomic Write Unit (PFail): 1 00:07:30.882 Atomic Compare & Write Unit: 1 00:07:30.882 Fused Compare & Write: Not Supported 00:07:30.882 Scatter-Gather List 00:07:30.882 SGL Command Set: Supported 00:07:30.882 SGL Keyed: Not Supported 00:07:30.882 SGL Bit Bucket Descriptor: Not Supported 00:07:30.882 SGL Metadata Pointer: Not Supported 00:07:30.882 Oversized SGL: Not Supported 00:07:30.882 SGL Metadata Address: Not Supported 00:07:30.882 SGL Offset: Not Supported 00:07:30.882 Transport SGL Data Block: Not Supported 00:07:30.882 Replay Protected Memory Block: Not Supported 00:07:30.882 00:07:30.882 Firmware Slot Information 00:07:30.882 ========================= 00:07:30.882 Active slot: 1 00:07:30.882 Slot 1 Firmware Revision: 1.0 00:07:30.882 00:07:30.882 00:07:30.882 Commands Supported and Effects 00:07:30.882 ============================== 00:07:30.882 Admin Commands 00:07:30.882 -------------- 00:07:30.882 Delete I/O Submission Queue (00h): Supported 00:07:30.882 Create I/O Submission Queue (01h): Supported 00:07:30.882 Get Log Page (02h): Supported 00:07:30.882 Delete I/O Completion Queue (04h): Supported 00:07:30.882 Create I/O Completion Queue (05h): Supported 00:07:30.882 Identify (06h): Supported 00:07:30.882 Abort (08h): Supported 00:07:30.882 Set Features (09h): Supported 00:07:30.882 Get Features (0Ah): Supported 00:07:30.882 Asynchronous Event Request (0Ch): Supported 00:07:30.882 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:30.882 Directive Send (19h): Supported 00:07:30.882 Directive Receive (1Ah): Supported 00:07:30.882 Virtualization Management (1Ch): Supported 00:07:30.882 Doorbell Buffer Config (7Ch): Supported 00:07:30.882 Format NVM (80h): Supported LBA-Change 00:07:30.882 I/O Commands 00:07:30.882 ------------ 00:07:30.882 Flush (00h): Supported LBA-Change 00:07:30.882 Write (01h): Supported LBA-Change 00:07:30.882 Read (02h): Supported 00:07:30.882 Compare (05h): Supported 00:07:30.882 Write Zeroes (08h): Supported LBA-Change 00:07:30.882 Dataset Management (09h): Supported LBA-Change 00:07:30.882 Unknown (0Ch): Supported 00:07:30.882 Unknown (12h): Supported 00:07:30.882 Copy (19h): 
Supported LBA-Change 00:07:30.882 Unknown (1Dh): Supported LBA-Change 00:07:30.882 00:07:30.882 Error Log 00:07:30.882 ========= 00:07:30.882 00:07:30.882 Arbitration 00:07:30.882 =========== 00:07:30.882 Arbitration Burst: no limit 00:07:30.882 00:07:30.882 Power Management 00:07:30.882 ================ 00:07:30.882 Number of Power States: 1 00:07:30.882 Current Power State: Power State #0 00:07:30.882 Power State #0: 00:07:30.882 Max Power: 25.00 W 00:07:30.882 Non-Operational State: Operational 00:07:30.882 Entry Latency: 16 microseconds 00:07:30.882 Exit Latency: 4 microseconds 00:07:30.882 Relative Read Throughput: 0 00:07:30.882 Relative Read Latency: 0 00:07:30.882 Relative Write Throughput: 0 00:07:30.882 Relative Write Latency: 0 00:07:30.882 Idle Power: Not Reported 00:07:30.882 Active Power: Not Reported 00:07:30.882 Non-Operational Permissive Mode: Not Supported 00:07:30.882 00:07:30.882 Health Information 00:07:30.882 ================== 00:07:30.882 Critical Warnings: 00:07:30.882 Available Spare Space: OK 00:07:30.882 Temperature: OK 00:07:30.882 Device Reliability: OK 00:07:30.882 Read Only: No 00:07:30.882 Volatile Memory Backup: OK 00:07:30.882 Current Temperature: 323 Kelvin (50 Celsius) 00:07:30.882 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:30.882 Available Spare: 0% 00:07:30.882 Available Spare Threshold: 0% 00:07:30.882 Life Percentage Used: 0% 00:07:30.882 Data Units Read: 2518 00:07:30.882 Data Units Written: 2305 00:07:30.882 Host Read Commands: 123293 00:07:30.882 Host Write Commands: 121563 00:07:30.882 Controller Busy Time: 0 minutes 00:07:30.882 Power Cycles: 0 00:07:30.882 Power On Hours: 0 hours 00:07:30.882 Unsafe Shutdowns: 0 00:07:30.882 Unrecoverable Media Errors: 0 00:07:30.882 Lifetime Error Log Entries: 0 00:07:30.882 Warning Temperature Time: 0 minutes 00:07:30.882 Critical Temperature Time: 0 minutes 00:07:30.882 00:07:30.882 Number of Queues 00:07:30.882 ================ 00:07:30.882 Number of I/O Submission Queues: 64 00:07:30.882 Number of I/O Completion Queues: 64 00:07:30.882 00:07:30.882 ZNS Specific Controller Data 00:07:30.882 ============================ 00:07:30.882 Zone Append Size Limit: 0 00:07:30.882 00:07:30.882 00:07:30.882 Active Namespaces 00:07:30.882 ================= 00:07:30.882 Namespace ID:1 00:07:30.882 Error Recovery Timeout: Unlimited 00:07:30.882 Command Set Identifier: NVM (00h) 00:07:30.882 Deallocate: Supported 00:07:30.882 Deallocated/Unwritten Error: Supported 00:07:30.882 Deallocated Read Value: All 0x00 00:07:30.882 Deallocate in Write Zeroes: Not Supported 00:07:30.883 Deallocated Guard Field: 0xFFFF 00:07:30.883 Flush: Supported 00:07:30.883 Reservation: Not Supported 00:07:30.883 Namespace Sharing Capabilities: Private 00:07:30.883 Size (in LBAs): 1048576 (4GiB) 00:07:30.883 Capacity (in LBAs): 1048576 (4GiB) 00:07:30.883 Utilization (in LBAs): 1048576 (4GiB) 00:07:30.883 Thin Provisioning: Not Supported 00:07:30.883 Per-NS Atomic Units: No 00:07:30.883 Maximum Single Source Range Length: 128 00:07:30.883 Maximum Copy Length: 128 00:07:30.883 Maximum Source Range Count: 128 00:07:30.883 NGUID/EUI64 Never Reused: No 00:07:30.883 Namespace Write Protected: No 00:07:30.883 Number of LBA Formats: 8 00:07:30.883 Current LBA Format: LBA Format #04 00:07:30.883 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:30.883 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:30.883 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:30.883 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:30.883 LBA 
Format #04: Data Size: 4096 Metadata Size: 0 00:07:30.883 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:30.883 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:30.883 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:30.883 00:07:30.883 NVM Specific Namespace Data 00:07:30.883 =========================== 00:07:30.883 Logical Block Storage Tag Mask: 0 00:07:30.883 Protection Information Capabilities: 00:07:30.883 16b Guard Protection Information Storage Tag Support: No 00:07:30.883 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:30.883 Storage Tag Check Read Support: No 00:07:30.883 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.883 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.883 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.883 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.883 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.883 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.883 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.883 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.883 Namespace ID:2 00:07:30.883 Error Recovery Timeout: Unlimited 00:07:30.883 Command Set Identifier: NVM (00h) 00:07:30.883 Deallocate: Supported 00:07:30.883 Deallocated/Unwritten Error: Supported 00:07:30.883 Deallocated Read Value: All 0x00 00:07:30.883 Deallocate in Write Zeroes: Not Supported 00:07:30.883 Deallocated Guard Field: 0xFFFF 00:07:30.883 Flush: Supported 00:07:30.883 Reservation: Not Supported 00:07:30.883 Namespace Sharing Capabilities: Private 00:07:30.883 Size (in LBAs): 1048576 (4GiB) 00:07:30.883 Capacity (in LBAs): 1048576 (4GiB) 00:07:30.883 Utilization (in LBAs): 1048576 (4GiB) 00:07:30.883 Thin Provisioning: Not Supported 00:07:30.883 Per-NS Atomic Units: No 00:07:30.883 Maximum Single Source Range Length: 128 00:07:30.883 Maximum Copy Length: 128 00:07:30.883 Maximum Source Range Count: 128 00:07:30.883 NGUID/EUI64 Never Reused: No 00:07:30.883 Namespace Write Protected: No 00:07:30.883 Number of LBA Formats: 8 00:07:30.883 Current LBA Format: LBA Format #04 00:07:30.883 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:30.883 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:30.883 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:30.883 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:30.883 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:30.883 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:30.883 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:30.883 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:30.883 00:07:30.883 NVM Specific Namespace Data 00:07:30.883 =========================== 00:07:30.883 Logical Block Storage Tag Mask: 0 00:07:30.883 Protection Information Capabilities: 00:07:30.883 16b Guard Protection Information Storage Tag Support: No 00:07:30.883 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:30.883 Storage Tag Check Read Support: No 00:07:30.883 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.883 Extended LBA Format #01: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:07:30.883 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.883 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.883 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.883 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.883 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.883 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.883 Namespace ID:3 00:07:30.883 Error Recovery Timeout: Unlimited 00:07:30.883 Command Set Identifier: NVM (00h) 00:07:30.883 Deallocate: Supported 00:07:30.883 Deallocated/Unwritten Error: Supported 00:07:30.883 Deallocated Read Value: All 0x00 00:07:30.883 Deallocate in Write Zeroes: Not Supported 00:07:30.883 Deallocated Guard Field: 0xFFFF 00:07:30.883 Flush: Supported 00:07:30.883 Reservation: Not Supported 00:07:30.883 Namespace Sharing Capabilities: Private 00:07:30.883 Size (in LBAs): 1048576 (4GiB) 00:07:30.883 Capacity (in LBAs): 1048576 (4GiB) 00:07:30.883 Utilization (in LBAs): 1048576 (4GiB) 00:07:30.883 Thin Provisioning: Not Supported 00:07:30.883 Per-NS Atomic Units: No 00:07:30.883 Maximum Single Source Range Length: 128 00:07:30.883 Maximum Copy Length: 128 00:07:30.883 Maximum Source Range Count: 128 00:07:30.883 NGUID/EUI64 Never Reused: No 00:07:30.883 Namespace Write Protected: No 00:07:30.883 Number of LBA Formats: 8 00:07:30.883 Current LBA Format: LBA Format #04 00:07:30.883 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:30.883 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:30.883 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:30.883 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:30.883 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:30.883 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:30.883 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:30.883 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:30.883 00:07:30.883 NVM Specific Namespace Data 00:07:30.883 =========================== 00:07:30.883 Logical Block Storage Tag Mask: 0 00:07:30.883 Protection Information Capabilities: 00:07:30.883 16b Guard Protection Information Storage Tag Support: No 00:07:30.883 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:30.883 Storage Tag Check Read Support: No 00:07:30.883 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.883 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.883 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.883 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.883 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.883 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.883 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.883 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.883 17:24:04 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:30.883 17:24:04 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:31.145 ===================================================== 00:07:31.145 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:31.145 ===================================================== 00:07:31.145 Controller Capabilities/Features 00:07:31.145 ================================ 00:07:31.145 Vendor ID: 1b36 00:07:31.145 Subsystem Vendor ID: 1af4 00:07:31.145 Serial Number: 12340 00:07:31.145 Model Number: QEMU NVMe Ctrl 00:07:31.145 Firmware Version: 8.0.0 00:07:31.145 Recommended Arb Burst: 6 00:07:31.145 IEEE OUI Identifier: 00 54 52 00:07:31.145 Multi-path I/O 00:07:31.145 May have multiple subsystem ports: No 00:07:31.145 May have multiple controllers: No 00:07:31.145 Associated with SR-IOV VF: No 00:07:31.145 Max Data Transfer Size: 524288 00:07:31.145 Max Number of Namespaces: 256 00:07:31.145 Max Number of I/O Queues: 64 00:07:31.145 NVMe Specification Version (VS): 1.4 00:07:31.145 NVMe Specification Version (Identify): 1.4 00:07:31.145 Maximum Queue Entries: 2048 00:07:31.145 Contiguous Queues Required: Yes 00:07:31.145 Arbitration Mechanisms Supported 00:07:31.145 Weighted Round Robin: Not Supported 00:07:31.145 Vendor Specific: Not Supported 00:07:31.145 Reset Timeout: 7500 ms 00:07:31.145 Doorbell Stride: 4 bytes 00:07:31.145 NVM Subsystem Reset: Not Supported 00:07:31.145 Command Sets Supported 00:07:31.145 NVM Command Set: Supported 00:07:31.145 Boot Partition: Not Supported 00:07:31.145 Memory Page Size Minimum: 4096 bytes 00:07:31.145 Memory Page Size Maximum: 65536 bytes 00:07:31.145 Persistent Memory Region: Not Supported 00:07:31.145 Optional Asynchronous Events Supported 00:07:31.145 Namespace Attribute Notices: Supported 00:07:31.145 Firmware Activation Notices: Not Supported 00:07:31.145 ANA Change Notices: Not Supported 00:07:31.145 PLE Aggregate Log Change Notices: Not Supported 00:07:31.145 LBA Status Info Alert Notices: Not Supported 00:07:31.145 EGE Aggregate Log Change Notices: Not Supported 00:07:31.145 Normal NVM Subsystem Shutdown event: Not Supported 00:07:31.145 Zone Descriptor Change Notices: Not Supported 00:07:31.145 Discovery Log Change Notices: Not Supported 00:07:31.145 Controller Attributes 00:07:31.145 128-bit Host Identifier: Not Supported 00:07:31.145 Non-Operational Permissive Mode: Not Supported 00:07:31.145 NVM Sets: Not Supported 00:07:31.145 Read Recovery Levels: Not Supported 00:07:31.145 Endurance Groups: Not Supported 00:07:31.145 Predictable Latency Mode: Not Supported 00:07:31.145 Traffic Based Keep ALive: Not Supported 00:07:31.145 Namespace Granularity: Not Supported 00:07:31.145 SQ Associations: Not Supported 00:07:31.145 UUID List: Not Supported 00:07:31.145 Multi-Domain Subsystem: Not Supported 00:07:31.145 Fixed Capacity Management: Not Supported 00:07:31.145 Variable Capacity Management: Not Supported 00:07:31.145 Delete Endurance Group: Not Supported 00:07:31.145 Delete NVM Set: Not Supported 00:07:31.145 Extended LBA Formats Supported: Supported 00:07:31.145 Flexible Data Placement Supported: Not Supported 00:07:31.145 00:07:31.145 Controller Memory Buffer Support 00:07:31.145 ================================ 00:07:31.145 Supported: No 00:07:31.145 00:07:31.145 Persistent Memory Region Support 00:07:31.145 ================================ 00:07:31.145 Supported: No 00:07:31.145 00:07:31.145 Admin Command Set Attributes 00:07:31.145 ============================ 00:07:31.145 Security Send/Receive: Not Supported 00:07:31.145 
Format NVM: Supported 00:07:31.145 Firmware Activate/Download: Not Supported 00:07:31.145 Namespace Management: Supported 00:07:31.145 Device Self-Test: Not Supported 00:07:31.145 Directives: Supported 00:07:31.145 NVMe-MI: Not Supported 00:07:31.145 Virtualization Management: Not Supported 00:07:31.145 Doorbell Buffer Config: Supported 00:07:31.145 Get LBA Status Capability: Not Supported 00:07:31.145 Command & Feature Lockdown Capability: Not Supported 00:07:31.145 Abort Command Limit: 4 00:07:31.145 Async Event Request Limit: 4 00:07:31.145 Number of Firmware Slots: N/A 00:07:31.145 Firmware Slot 1 Read-Only: N/A 00:07:31.145 Firmware Activation Without Reset: N/A 00:07:31.145 Multiple Update Detection Support: N/A 00:07:31.145 Firmware Update Granularity: No Information Provided 00:07:31.145 Per-Namespace SMART Log: Yes 00:07:31.145 Asymmetric Namespace Access Log Page: Not Supported 00:07:31.145 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:31.145 Command Effects Log Page: Supported 00:07:31.145 Get Log Page Extended Data: Supported 00:07:31.145 Telemetry Log Pages: Not Supported 00:07:31.145 Persistent Event Log Pages: Not Supported 00:07:31.145 Supported Log Pages Log Page: May Support 00:07:31.145 Commands Supported & Effects Log Page: Not Supported 00:07:31.145 Feature Identifiers & Effects Log Page:May Support 00:07:31.145 NVMe-MI Commands & Effects Log Page: May Support 00:07:31.145 Data Area 4 for Telemetry Log: Not Supported 00:07:31.145 Error Log Page Entries Supported: 1 00:07:31.145 Keep Alive: Not Supported 00:07:31.145 00:07:31.145 NVM Command Set Attributes 00:07:31.145 ========================== 00:07:31.145 Submission Queue Entry Size 00:07:31.145 Max: 64 00:07:31.145 Min: 64 00:07:31.145 Completion Queue Entry Size 00:07:31.145 Max: 16 00:07:31.145 Min: 16 00:07:31.145 Number of Namespaces: 256 00:07:31.145 Compare Command: Supported 00:07:31.145 Write Uncorrectable Command: Not Supported 00:07:31.145 Dataset Management Command: Supported 00:07:31.145 Write Zeroes Command: Supported 00:07:31.145 Set Features Save Field: Supported 00:07:31.145 Reservations: Not Supported 00:07:31.145 Timestamp: Supported 00:07:31.145 Copy: Supported 00:07:31.146 Volatile Write Cache: Present 00:07:31.146 Atomic Write Unit (Normal): 1 00:07:31.146 Atomic Write Unit (PFail): 1 00:07:31.146 Atomic Compare & Write Unit: 1 00:07:31.146 Fused Compare & Write: Not Supported 00:07:31.146 Scatter-Gather List 00:07:31.146 SGL Command Set: Supported 00:07:31.146 SGL Keyed: Not Supported 00:07:31.146 SGL Bit Bucket Descriptor: Not Supported 00:07:31.146 SGL Metadata Pointer: Not Supported 00:07:31.146 Oversized SGL: Not Supported 00:07:31.146 SGL Metadata Address: Not Supported 00:07:31.146 SGL Offset: Not Supported 00:07:31.146 Transport SGL Data Block: Not Supported 00:07:31.146 Replay Protected Memory Block: Not Supported 00:07:31.146 00:07:31.146 Firmware Slot Information 00:07:31.146 ========================= 00:07:31.146 Active slot: 1 00:07:31.146 Slot 1 Firmware Revision: 1.0 00:07:31.146 00:07:31.146 00:07:31.146 Commands Supported and Effects 00:07:31.146 ============================== 00:07:31.146 Admin Commands 00:07:31.146 -------------- 00:07:31.146 Delete I/O Submission Queue (00h): Supported 00:07:31.146 Create I/O Submission Queue (01h): Supported 00:07:31.146 Get Log Page (02h): Supported 00:07:31.146 Delete I/O Completion Queue (04h): Supported 00:07:31.146 Create I/O Completion Queue (05h): Supported 00:07:31.146 Identify (06h): Supported 00:07:31.146 Abort (08h): Supported 
00:07:31.146 Set Features (09h): Supported 00:07:31.146 Get Features (0Ah): Supported 00:07:31.146 Asynchronous Event Request (0Ch): Supported 00:07:31.146 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:31.146 Directive Send (19h): Supported 00:07:31.146 Directive Receive (1Ah): Supported 00:07:31.146 Virtualization Management (1Ch): Supported 00:07:31.146 Doorbell Buffer Config (7Ch): Supported 00:07:31.146 Format NVM (80h): Supported LBA-Change 00:07:31.146 I/O Commands 00:07:31.146 ------------ 00:07:31.146 Flush (00h): Supported LBA-Change 00:07:31.146 Write (01h): Supported LBA-Change 00:07:31.146 Read (02h): Supported 00:07:31.146 Compare (05h): Supported 00:07:31.146 Write Zeroes (08h): Supported LBA-Change 00:07:31.146 Dataset Management (09h): Supported LBA-Change 00:07:31.146 Unknown (0Ch): Supported 00:07:31.146 Unknown (12h): Supported 00:07:31.146 Copy (19h): Supported LBA-Change 00:07:31.146 Unknown (1Dh): Supported LBA-Change 00:07:31.146 00:07:31.146 Error Log 00:07:31.146 ========= 00:07:31.146 00:07:31.146 Arbitration 00:07:31.146 =========== 00:07:31.146 Arbitration Burst: no limit 00:07:31.146 00:07:31.146 Power Management 00:07:31.146 ================ 00:07:31.146 Number of Power States: 1 00:07:31.146 Current Power State: Power State #0 00:07:31.146 Power State #0: 00:07:31.146 Max Power: 25.00 W 00:07:31.146 Non-Operational State: Operational 00:07:31.146 Entry Latency: 16 microseconds 00:07:31.146 Exit Latency: 4 microseconds 00:07:31.146 Relative Read Throughput: 0 00:07:31.146 Relative Read Latency: 0 00:07:31.146 Relative Write Throughput: 0 00:07:31.146 Relative Write Latency: 0 00:07:31.146 Idle Power: Not Reported 00:07:31.146 Active Power: Not Reported 00:07:31.146 Non-Operational Permissive Mode: Not Supported 00:07:31.146 00:07:31.146 Health Information 00:07:31.146 ================== 00:07:31.146 Critical Warnings: 00:07:31.146 Available Spare Space: OK 00:07:31.146 Temperature: OK 00:07:31.146 Device Reliability: OK 00:07:31.146 Read Only: No 00:07:31.146 Volatile Memory Backup: OK 00:07:31.146 Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.146 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:31.146 Available Spare: 0% 00:07:31.146 Available Spare Threshold: 0% 00:07:31.146 Life Percentage Used: 0% 00:07:31.146 Data Units Read: 684 00:07:31.146 Data Units Written: 612 00:07:31.146 Host Read Commands: 39564 00:07:31.146 Host Write Commands: 39350 00:07:31.146 Controller Busy Time: 0 minutes 00:07:31.146 Power Cycles: 0 00:07:31.146 Power On Hours: 0 hours 00:07:31.146 Unsafe Shutdowns: 0 00:07:31.146 Unrecoverable Media Errors: 0 00:07:31.146 Lifetime Error Log Entries: 0 00:07:31.146 Warning Temperature Time: 0 minutes 00:07:31.146 Critical Temperature Time: 0 minutes 00:07:31.146 00:07:31.146 Number of Queues 00:07:31.146 ================ 00:07:31.146 Number of I/O Submission Queues: 64 00:07:31.146 Number of I/O Completion Queues: 64 00:07:31.146 00:07:31.146 ZNS Specific Controller Data 00:07:31.146 ============================ 00:07:31.146 Zone Append Size Limit: 0 00:07:31.146 00:07:31.146 00:07:31.146 Active Namespaces 00:07:31.146 ================= 00:07:31.146 Namespace ID:1 00:07:31.146 Error Recovery Timeout: Unlimited 00:07:31.146 Command Set Identifier: NVM (00h) 00:07:31.146 Deallocate: Supported 00:07:31.146 Deallocated/Unwritten Error: Supported 00:07:31.146 Deallocated Read Value: All 0x00 00:07:31.146 Deallocate in Write Zeroes: Not Supported 00:07:31.146 Deallocated Guard Field: 0xFFFF 00:07:31.146 Flush: 
Supported 00:07:31.146 Reservation: Not Supported 00:07:31.146 Metadata Transferred as: Separate Metadata Buffer 00:07:31.146 Namespace Sharing Capabilities: Private 00:07:31.146 Size (in LBAs): 1548666 (5GiB) 00:07:31.146 Capacity (in LBAs): 1548666 (5GiB) 00:07:31.146 Utilization (in LBAs): 1548666 (5GiB) 00:07:31.146 Thin Provisioning: Not Supported 00:07:31.146 Per-NS Atomic Units: No 00:07:31.146 Maximum Single Source Range Length: 128 00:07:31.146 Maximum Copy Length: 128 00:07:31.146 Maximum Source Range Count: 128 00:07:31.146 NGUID/EUI64 Never Reused: No 00:07:31.146 Namespace Write Protected: No 00:07:31.146 Number of LBA Formats: 8 00:07:31.146 Current LBA Format: LBA Format #07 00:07:31.146 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.146 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.146 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.146 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.146 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:31.146 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.146 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:31.146 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.146 00:07:31.146 NVM Specific Namespace Data 00:07:31.146 =========================== 00:07:31.146 Logical Block Storage Tag Mask: 0 00:07:31.146 Protection Information Capabilities: 00:07:31.146 16b Guard Protection Information Storage Tag Support: No 00:07:31.146 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.146 Storage Tag Check Read Support: No 00:07:31.146 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.146 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.146 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.146 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.146 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.146 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.146 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.146 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.146 17:24:04 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:31.146 17:24:04 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:31.407 ===================================================== 00:07:31.407 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:31.407 ===================================================== 00:07:31.407 Controller Capabilities/Features 00:07:31.407 ================================ 00:07:31.407 Vendor ID: 1b36 00:07:31.407 Subsystem Vendor ID: 1af4 00:07:31.407 Serial Number: 12341 00:07:31.407 Model Number: QEMU NVMe Ctrl 00:07:31.407 Firmware Version: 8.0.0 00:07:31.407 Recommended Arb Burst: 6 00:07:31.407 IEEE OUI Identifier: 00 54 52 00:07:31.407 Multi-path I/O 00:07:31.407 May have multiple subsystem ports: No 00:07:31.407 May have multiple controllers: No 00:07:31.407 Associated with SR-IOV VF: No 00:07:31.407 Max Data Transfer Size: 524288 00:07:31.407 Max Number of Namespaces: 256 00:07:31.407 Max Number of I/O Queues: 64 00:07:31.407 NVMe 
Specification Version (VS): 1.4 00:07:31.407 NVMe Specification Version (Identify): 1.4 00:07:31.407 Maximum Queue Entries: 2048 00:07:31.407 Contiguous Queues Required: Yes 00:07:31.407 Arbitration Mechanisms Supported 00:07:31.407 Weighted Round Robin: Not Supported 00:07:31.407 Vendor Specific: Not Supported 00:07:31.407 Reset Timeout: 7500 ms 00:07:31.407 Doorbell Stride: 4 bytes 00:07:31.407 NVM Subsystem Reset: Not Supported 00:07:31.407 Command Sets Supported 00:07:31.407 NVM Command Set: Supported 00:07:31.407 Boot Partition: Not Supported 00:07:31.407 Memory Page Size Minimum: 4096 bytes 00:07:31.407 Memory Page Size Maximum: 65536 bytes 00:07:31.407 Persistent Memory Region: Not Supported 00:07:31.407 Optional Asynchronous Events Supported 00:07:31.407 Namespace Attribute Notices: Supported 00:07:31.407 Firmware Activation Notices: Not Supported 00:07:31.407 ANA Change Notices: Not Supported 00:07:31.407 PLE Aggregate Log Change Notices: Not Supported 00:07:31.407 LBA Status Info Alert Notices: Not Supported 00:07:31.407 EGE Aggregate Log Change Notices: Not Supported 00:07:31.407 Normal NVM Subsystem Shutdown event: Not Supported 00:07:31.407 Zone Descriptor Change Notices: Not Supported 00:07:31.407 Discovery Log Change Notices: Not Supported 00:07:31.407 Controller Attributes 00:07:31.407 128-bit Host Identifier: Not Supported 00:07:31.407 Non-Operational Permissive Mode: Not Supported 00:07:31.407 NVM Sets: Not Supported 00:07:31.407 Read Recovery Levels: Not Supported 00:07:31.407 Endurance Groups: Not Supported 00:07:31.408 Predictable Latency Mode: Not Supported 00:07:31.408 Traffic Based Keep ALive: Not Supported 00:07:31.408 Namespace Granularity: Not Supported 00:07:31.408 SQ Associations: Not Supported 00:07:31.408 UUID List: Not Supported 00:07:31.408 Multi-Domain Subsystem: Not Supported 00:07:31.408 Fixed Capacity Management: Not Supported 00:07:31.408 Variable Capacity Management: Not Supported 00:07:31.408 Delete Endurance Group: Not Supported 00:07:31.408 Delete NVM Set: Not Supported 00:07:31.408 Extended LBA Formats Supported: Supported 00:07:31.408 Flexible Data Placement Supported: Not Supported 00:07:31.408 00:07:31.408 Controller Memory Buffer Support 00:07:31.408 ================================ 00:07:31.408 Supported: No 00:07:31.408 00:07:31.408 Persistent Memory Region Support 00:07:31.408 ================================ 00:07:31.408 Supported: No 00:07:31.408 00:07:31.408 Admin Command Set Attributes 00:07:31.408 ============================ 00:07:31.408 Security Send/Receive: Not Supported 00:07:31.408 Format NVM: Supported 00:07:31.408 Firmware Activate/Download: Not Supported 00:07:31.408 Namespace Management: Supported 00:07:31.408 Device Self-Test: Not Supported 00:07:31.408 Directives: Supported 00:07:31.408 NVMe-MI: Not Supported 00:07:31.408 Virtualization Management: Not Supported 00:07:31.408 Doorbell Buffer Config: Supported 00:07:31.408 Get LBA Status Capability: Not Supported 00:07:31.408 Command & Feature Lockdown Capability: Not Supported 00:07:31.408 Abort Command Limit: 4 00:07:31.408 Async Event Request Limit: 4 00:07:31.408 Number of Firmware Slots: N/A 00:07:31.408 Firmware Slot 1 Read-Only: N/A 00:07:31.408 Firmware Activation Without Reset: N/A 00:07:31.408 Multiple Update Detection Support: N/A 00:07:31.408 Firmware Update Granularity: No Information Provided 00:07:31.408 Per-Namespace SMART Log: Yes 00:07:31.408 Asymmetric Namespace Access Log Page: Not Supported 00:07:31.408 Subsystem NQN: nqn.2019-08.org.qemu:12341 
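The per-namespace "LBA Format #NN: Data Size ... Metadata Size ..." lines printed throughout this run are decoded from the 32-bit LBA Format descriptors in Identify Namespace data (per the NVMe spec: MS in bits 15:0, LBADS in bits 23:16). A minimal sketch with a hypothetical descriptor value chosen to match format #07 above:

```c
/* Decode one Identify Namespace LBA Format descriptor dword.
 * The sample dword is hypothetical, built to match LBA Format #07
 * in this log (4096-byte data, 64-byte metadata). */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t lbaf  = (12u << 16) | 64u;          /* LBADS = 12, MS = 64 */
    uint16_t ms    = (uint16_t)(lbaf & 0xffff);  /* metadata bytes per LBA */
    uint8_t  lbads = (uint8_t)((lbaf >> 16) & 0xff); /* data size = 2^LBADS */

    printf("Data Size: %u Metadata Size: %u\n", 1u << lbads, ms);
    return 0;
}
```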
00:07:31.408 Command Effects Log Page: Supported 00:07:31.408 Get Log Page Extended Data: Supported 00:07:31.408 Telemetry Log Pages: Not Supported 00:07:31.408 Persistent Event Log Pages: Not Supported 00:07:31.408 Supported Log Pages Log Page: May Support 00:07:31.408 Commands Supported & Effects Log Page: Not Supported 00:07:31.408 Feature Identifiers & Effects Log Page:May Support 00:07:31.408 NVMe-MI Commands & Effects Log Page: May Support 00:07:31.408 Data Area 4 for Telemetry Log: Not Supported 00:07:31.408 Error Log Page Entries Supported: 1 00:07:31.408 Keep Alive: Not Supported 00:07:31.408 00:07:31.408 NVM Command Set Attributes 00:07:31.408 ========================== 00:07:31.408 Submission Queue Entry Size 00:07:31.408 Max: 64 00:07:31.408 Min: 64 00:07:31.408 Completion Queue Entry Size 00:07:31.408 Max: 16 00:07:31.408 Min: 16 00:07:31.408 Number of Namespaces: 256 00:07:31.408 Compare Command: Supported 00:07:31.408 Write Uncorrectable Command: Not Supported 00:07:31.408 Dataset Management Command: Supported 00:07:31.408 Write Zeroes Command: Supported 00:07:31.408 Set Features Save Field: Supported 00:07:31.408 Reservations: Not Supported 00:07:31.408 Timestamp: Supported 00:07:31.408 Copy: Supported 00:07:31.408 Volatile Write Cache: Present 00:07:31.408 Atomic Write Unit (Normal): 1 00:07:31.408 Atomic Write Unit (PFail): 1 00:07:31.408 Atomic Compare & Write Unit: 1 00:07:31.408 Fused Compare & Write: Not Supported 00:07:31.408 Scatter-Gather List 00:07:31.408 SGL Command Set: Supported 00:07:31.408 SGL Keyed: Not Supported 00:07:31.408 SGL Bit Bucket Descriptor: Not Supported 00:07:31.408 SGL Metadata Pointer: Not Supported 00:07:31.408 Oversized SGL: Not Supported 00:07:31.408 SGL Metadata Address: Not Supported 00:07:31.408 SGL Offset: Not Supported 00:07:31.408 Transport SGL Data Block: Not Supported 00:07:31.408 Replay Protected Memory Block: Not Supported 00:07:31.408 00:07:31.408 Firmware Slot Information 00:07:31.408 ========================= 00:07:31.408 Active slot: 1 00:07:31.408 Slot 1 Firmware Revision: 1.0 00:07:31.408 00:07:31.408 00:07:31.408 Commands Supported and Effects 00:07:31.408 ============================== 00:07:31.408 Admin Commands 00:07:31.408 -------------- 00:07:31.408 Delete I/O Submission Queue (00h): Supported 00:07:31.408 Create I/O Submission Queue (01h): Supported 00:07:31.408 Get Log Page (02h): Supported 00:07:31.408 Delete I/O Completion Queue (04h): Supported 00:07:31.408 Create I/O Completion Queue (05h): Supported 00:07:31.408 Identify (06h): Supported 00:07:31.408 Abort (08h): Supported 00:07:31.408 Set Features (09h): Supported 00:07:31.408 Get Features (0Ah): Supported 00:07:31.408 Asynchronous Event Request (0Ch): Supported 00:07:31.408 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:31.408 Directive Send (19h): Supported 00:07:31.408 Directive Receive (1Ah): Supported 00:07:31.408 Virtualization Management (1Ch): Supported 00:07:31.408 Doorbell Buffer Config (7Ch): Supported 00:07:31.408 Format NVM (80h): Supported LBA-Change 00:07:31.408 I/O Commands 00:07:31.408 ------------ 00:07:31.408 Flush (00h): Supported LBA-Change 00:07:31.408 Write (01h): Supported LBA-Change 00:07:31.408 Read (02h): Supported 00:07:31.408 Compare (05h): Supported 00:07:31.408 Write Zeroes (08h): Supported LBA-Change 00:07:31.408 Dataset Management (09h): Supported LBA-Change 00:07:31.408 Unknown (0Ch): Supported 00:07:31.408 Unknown (12h): Supported 00:07:31.408 Copy (19h): Supported LBA-Change 00:07:31.408 Unknown (1Dh): 
Supported LBA-Change 00:07:31.408 00:07:31.408 Error Log 00:07:31.408 ========= 00:07:31.408 00:07:31.408 Arbitration 00:07:31.408 =========== 00:07:31.408 Arbitration Burst: no limit 00:07:31.408 00:07:31.408 Power Management 00:07:31.408 ================ 00:07:31.408 Number of Power States: 1 00:07:31.408 Current Power State: Power State #0 00:07:31.408 Power State #0: 00:07:31.408 Max Power: 25.00 W 00:07:31.408 Non-Operational State: Operational 00:07:31.408 Entry Latency: 16 microseconds 00:07:31.408 Exit Latency: 4 microseconds 00:07:31.408 Relative Read Throughput: 0 00:07:31.408 Relative Read Latency: 0 00:07:31.408 Relative Write Throughput: 0 00:07:31.408 Relative Write Latency: 0 00:07:31.408 Idle Power: Not Reported 00:07:31.408 Active Power: Not Reported 00:07:31.408 Non-Operational Permissive Mode: Not Supported 00:07:31.408 00:07:31.408 Health Information 00:07:31.408 ================== 00:07:31.408 Critical Warnings: 00:07:31.408 Available Spare Space: OK 00:07:31.408 Temperature: OK 00:07:31.408 Device Reliability: OK 00:07:31.408 Read Only: No 00:07:31.408 Volatile Memory Backup: OK 00:07:31.408 Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.408 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:31.408 Available Spare: 0% 00:07:31.408 Available Spare Threshold: 0% 00:07:31.408 Life Percentage Used: 0% 00:07:31.408 Data Units Read: 1090 00:07:31.408 Data Units Written: 951 00:07:31.408 Host Read Commands: 59084 00:07:31.408 Host Write Commands: 57780 00:07:31.408 Controller Busy Time: 0 minutes 00:07:31.408 Power Cycles: 0 00:07:31.408 Power On Hours: 0 hours 00:07:31.408 Unsafe Shutdowns: 0 00:07:31.408 Unrecoverable Media Errors: 0 00:07:31.408 Lifetime Error Log Entries: 0 00:07:31.408 Warning Temperature Time: 0 minutes 00:07:31.408 Critical Temperature Time: 0 minutes 00:07:31.408 00:07:31.408 Number of Queues 00:07:31.408 ================ 00:07:31.408 Number of I/O Submission Queues: 64 00:07:31.408 Number of I/O Completion Queues: 64 00:07:31.408 00:07:31.408 ZNS Specific Controller Data 00:07:31.408 ============================ 00:07:31.408 Zone Append Size Limit: 0 00:07:31.408 00:07:31.408 00:07:31.408 Active Namespaces 00:07:31.408 ================= 00:07:31.408 Namespace ID:1 00:07:31.408 Error Recovery Timeout: Unlimited 00:07:31.408 Command Set Identifier: NVM (00h) 00:07:31.408 Deallocate: Supported 00:07:31.408 Deallocated/Unwritten Error: Supported 00:07:31.408 Deallocated Read Value: All 0x00 00:07:31.408 Deallocate in Write Zeroes: Not Supported 00:07:31.408 Deallocated Guard Field: 0xFFFF 00:07:31.408 Flush: Supported 00:07:31.408 Reservation: Not Supported 00:07:31.408 Namespace Sharing Capabilities: Private 00:07:31.408 Size (in LBAs): 1310720 (5GiB) 00:07:31.408 Capacity (in LBAs): 1310720 (5GiB) 00:07:31.408 Utilization (in LBAs): 1310720 (5GiB) 00:07:31.408 Thin Provisioning: Not Supported 00:07:31.409 Per-NS Atomic Units: No 00:07:31.409 Maximum Single Source Range Length: 128 00:07:31.409 Maximum Copy Length: 128 00:07:31.409 Maximum Source Range Count: 128 00:07:31.409 NGUID/EUI64 Never Reused: No 00:07:31.409 Namespace Write Protected: No 00:07:31.409 Number of LBA Formats: 8 00:07:31.409 Current LBA Format: LBA Format #04 00:07:31.409 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.409 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.409 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.409 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.409 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:31.409 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.409 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:31.409 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.409 00:07:31.409 NVM Specific Namespace Data 00:07:31.409 =========================== 00:07:31.409 Logical Block Storage Tag Mask: 0 00:07:31.409 Protection Information Capabilities: 00:07:31.409 16b Guard Protection Information Storage Tag Support: No 00:07:31.409 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.409 Storage Tag Check Read Support: No 00:07:31.409 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.409 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.409 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.409 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.409 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.409 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.409 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.409 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.409 17:24:04 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:31.409 17:24:04 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:31.671 ===================================================== 00:07:31.671 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:31.671 ===================================================== 00:07:31.671 Controller Capabilities/Features 00:07:31.671 ================================ 00:07:31.671 Vendor ID: 1b36 00:07:31.671 Subsystem Vendor ID: 1af4 00:07:31.671 Serial Number: 12342 00:07:31.671 Model Number: QEMU NVMe Ctrl 00:07:31.671 Firmware Version: 8.0.0 00:07:31.671 Recommended Arb Burst: 6 00:07:31.671 IEEE OUI Identifier: 00 54 52 00:07:31.671 Multi-path I/O 00:07:31.671 May have multiple subsystem ports: No 00:07:31.671 May have multiple controllers: No 00:07:31.671 Associated with SR-IOV VF: No 00:07:31.671 Max Data Transfer Size: 524288 00:07:31.671 Max Number of Namespaces: 256 00:07:31.671 Max Number of I/O Queues: 64 00:07:31.671 NVMe Specification Version (VS): 1.4 00:07:31.671 NVMe Specification Version (Identify): 1.4 00:07:31.671 Maximum Queue Entries: 2048 00:07:31.671 Contiguous Queues Required: Yes 00:07:31.671 Arbitration Mechanisms Supported 00:07:31.671 Weighted Round Robin: Not Supported 00:07:31.671 Vendor Specific: Not Supported 00:07:31.671 Reset Timeout: 7500 ms 00:07:31.671 Doorbell Stride: 4 bytes 00:07:31.671 NVM Subsystem Reset: Not Supported 00:07:31.671 Command Sets Supported 00:07:31.671 NVM Command Set: Supported 00:07:31.671 Boot Partition: Not Supported 00:07:31.671 Memory Page Size Minimum: 4096 bytes 00:07:31.671 Memory Page Size Maximum: 65536 bytes 00:07:31.671 Persistent Memory Region: Not Supported 00:07:31.671 Optional Asynchronous Events Supported 00:07:31.671 Namespace Attribute Notices: Supported 00:07:31.671 Firmware Activation Notices: Not Supported 00:07:31.671 ANA Change Notices: Not Supported 00:07:31.671 PLE Aggregate Log Change Notices: Not Supported 00:07:31.671 LBA Status Info Alert Notices: 
Not Supported 00:07:31.671 EGE Aggregate Log Change Notices: Not Supported 00:07:31.671 Normal NVM Subsystem Shutdown event: Not Supported 00:07:31.671 Zone Descriptor Change Notices: Not Supported 00:07:31.671 Discovery Log Change Notices: Not Supported 00:07:31.671 Controller Attributes 00:07:31.671 128-bit Host Identifier: Not Supported 00:07:31.671 Non-Operational Permissive Mode: Not Supported 00:07:31.671 NVM Sets: Not Supported 00:07:31.671 Read Recovery Levels: Not Supported 00:07:31.671 Endurance Groups: Not Supported 00:07:31.671 Predictable Latency Mode: Not Supported 00:07:31.671 Traffic Based Keep ALive: Not Supported 00:07:31.671 Namespace Granularity: Not Supported 00:07:31.671 SQ Associations: Not Supported 00:07:31.671 UUID List: Not Supported 00:07:31.671 Multi-Domain Subsystem: Not Supported 00:07:31.671 Fixed Capacity Management: Not Supported 00:07:31.671 Variable Capacity Management: Not Supported 00:07:31.671 Delete Endurance Group: Not Supported 00:07:31.671 Delete NVM Set: Not Supported 00:07:31.671 Extended LBA Formats Supported: Supported 00:07:31.671 Flexible Data Placement Supported: Not Supported 00:07:31.671 00:07:31.671 Controller Memory Buffer Support 00:07:31.671 ================================ 00:07:31.671 Supported: No 00:07:31.671 00:07:31.671 Persistent Memory Region Support 00:07:31.671 ================================ 00:07:31.671 Supported: No 00:07:31.671 00:07:31.671 Admin Command Set Attributes 00:07:31.671 ============================ 00:07:31.671 Security Send/Receive: Not Supported 00:07:31.671 Format NVM: Supported 00:07:31.671 Firmware Activate/Download: Not Supported 00:07:31.671 Namespace Management: Supported 00:07:31.671 Device Self-Test: Not Supported 00:07:31.671 Directives: Supported 00:07:31.671 NVMe-MI: Not Supported 00:07:31.671 Virtualization Management: Not Supported 00:07:31.671 Doorbell Buffer Config: Supported 00:07:31.671 Get LBA Status Capability: Not Supported 00:07:31.671 Command & Feature Lockdown Capability: Not Supported 00:07:31.671 Abort Command Limit: 4 00:07:31.671 Async Event Request Limit: 4 00:07:31.671 Number of Firmware Slots: N/A 00:07:31.671 Firmware Slot 1 Read-Only: N/A 00:07:31.671 Firmware Activation Without Reset: N/A 00:07:31.671 Multiple Update Detection Support: N/A 00:07:31.671 Firmware Update Granularity: No Information Provided 00:07:31.671 Per-Namespace SMART Log: Yes 00:07:31.671 Asymmetric Namespace Access Log Page: Not Supported 00:07:31.671 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:31.671 Command Effects Log Page: Supported 00:07:31.671 Get Log Page Extended Data: Supported 00:07:31.671 Telemetry Log Pages: Not Supported 00:07:31.671 Persistent Event Log Pages: Not Supported 00:07:31.671 Supported Log Pages Log Page: May Support 00:07:31.671 Commands Supported & Effects Log Page: Not Supported 00:07:31.671 Feature Identifiers & Effects Log Page:May Support 00:07:31.671 NVMe-MI Commands & Effects Log Page: May Support 00:07:31.671 Data Area 4 for Telemetry Log: Not Supported 00:07:31.671 Error Log Page Entries Supported: 1 00:07:31.671 Keep Alive: Not Supported 00:07:31.671 00:07:31.671 NVM Command Set Attributes 00:07:31.671 ========================== 00:07:31.671 Submission Queue Entry Size 00:07:31.671 Max: 64 00:07:31.671 Min: 64 00:07:31.671 Completion Queue Entry Size 00:07:31.671 Max: 16 00:07:31.671 Min: 16 00:07:31.671 Number of Namespaces: 256 00:07:31.671 Compare Command: Supported 00:07:31.671 Write Uncorrectable Command: Not Supported 00:07:31.671 Dataset Management Command: 
Supported 00:07:31.671 Write Zeroes Command: Supported 00:07:31.671 Set Features Save Field: Supported 00:07:31.671 Reservations: Not Supported 00:07:31.671 Timestamp: Supported 00:07:31.671 Copy: Supported 00:07:31.671 Volatile Write Cache: Present 00:07:31.671 Atomic Write Unit (Normal): 1 00:07:31.671 Atomic Write Unit (PFail): 1 00:07:31.671 Atomic Compare & Write Unit: 1 00:07:31.671 Fused Compare & Write: Not Supported 00:07:31.671 Scatter-Gather List 00:07:31.671 SGL Command Set: Supported 00:07:31.671 SGL Keyed: Not Supported 00:07:31.671 SGL Bit Bucket Descriptor: Not Supported 00:07:31.671 SGL Metadata Pointer: Not Supported 00:07:31.671 Oversized SGL: Not Supported 00:07:31.671 SGL Metadata Address: Not Supported 00:07:31.671 SGL Offset: Not Supported 00:07:31.671 Transport SGL Data Block: Not Supported 00:07:31.671 Replay Protected Memory Block: Not Supported 00:07:31.671 00:07:31.671 Firmware Slot Information 00:07:31.671 ========================= 00:07:31.671 Active slot: 1 00:07:31.671 Slot 1 Firmware Revision: 1.0 00:07:31.671 00:07:31.671 00:07:31.671 Commands Supported and Effects 00:07:31.671 ============================== 00:07:31.671 Admin Commands 00:07:31.671 -------------- 00:07:31.671 Delete I/O Submission Queue (00h): Supported 00:07:31.671 Create I/O Submission Queue (01h): Supported 00:07:31.671 Get Log Page (02h): Supported 00:07:31.671 Delete I/O Completion Queue (04h): Supported 00:07:31.671 Create I/O Completion Queue (05h): Supported 00:07:31.671 Identify (06h): Supported 00:07:31.671 Abort (08h): Supported 00:07:31.671 Set Features (09h): Supported 00:07:31.671 Get Features (0Ah): Supported 00:07:31.671 Asynchronous Event Request (0Ch): Supported 00:07:31.671 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:31.671 Directive Send (19h): Supported 00:07:31.671 Directive Receive (1Ah): Supported 00:07:31.671 Virtualization Management (1Ch): Supported 00:07:31.671 Doorbell Buffer Config (7Ch): Supported 00:07:31.671 Format NVM (80h): Supported LBA-Change 00:07:31.671 I/O Commands 00:07:31.671 ------------ 00:07:31.671 Flush (00h): Supported LBA-Change 00:07:31.671 Write (01h): Supported LBA-Change 00:07:31.671 Read (02h): Supported 00:07:31.671 Compare (05h): Supported 00:07:31.671 Write Zeroes (08h): Supported LBA-Change 00:07:31.671 Dataset Management (09h): Supported LBA-Change 00:07:31.671 Unknown (0Ch): Supported 00:07:31.671 Unknown (12h): Supported 00:07:31.671 Copy (19h): Supported LBA-Change 00:07:31.671 Unknown (1Dh): Supported LBA-Change 00:07:31.671 00:07:31.671 Error Log 00:07:31.671 ========= 00:07:31.671 00:07:31.671 Arbitration 00:07:31.671 =========== 00:07:31.671 Arbitration Burst: no limit 00:07:31.671 00:07:31.671 Power Management 00:07:31.671 ================ 00:07:31.671 Number of Power States: 1 00:07:31.672 Current Power State: Power State #0 00:07:31.672 Power State #0: 00:07:31.672 Max Power: 25.00 W 00:07:31.672 Non-Operational State: Operational 00:07:31.672 Entry Latency: 16 microseconds 00:07:31.672 Exit Latency: 4 microseconds 00:07:31.672 Relative Read Throughput: 0 00:07:31.672 Relative Read Latency: 0 00:07:31.672 Relative Write Throughput: 0 00:07:31.672 Relative Write Latency: 0 00:07:31.672 Idle Power: Not Reported 00:07:31.672 Active Power: Not Reported 00:07:31.672 Non-Operational Permissive Mode: Not Supported 00:07:31.672 00:07:31.672 Health Information 00:07:31.672 ================== 00:07:31.672 Critical Warnings: 00:07:31.672 Available Spare Space: OK 00:07:31.672 Temperature: OK 00:07:31.672 Device 
Reliability: OK 00:07:31.672 Read Only: No 00:07:31.672 Volatile Memory Backup: OK 00:07:31.672 Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.672 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:31.672 Available Spare: 0% 00:07:31.672 Available Spare Threshold: 0% 00:07:31.672 Life Percentage Used: 0% 00:07:31.672 Data Units Read: 2518 00:07:31.672 Data Units Written: 2305 00:07:31.672 Host Read Commands: 123293 00:07:31.672 Host Write Commands: 121563 00:07:31.672 Controller Busy Time: 0 minutes 00:07:31.672 Power Cycles: 0 00:07:31.672 Power On Hours: 0 hours 00:07:31.672 Unsafe Shutdowns: 0 00:07:31.672 Unrecoverable Media Errors: 0 00:07:31.672 Lifetime Error Log Entries: 0 00:07:31.672 Warning Temperature Time: 0 minutes 00:07:31.672 Critical Temperature Time: 0 minutes 00:07:31.672 00:07:31.672 Number of Queues 00:07:31.672 ================ 00:07:31.672 Number of I/O Submission Queues: 64 00:07:31.672 Number of I/O Completion Queues: 64 00:07:31.672 00:07:31.672 ZNS Specific Controller Data 00:07:31.672 ============================ 00:07:31.672 Zone Append Size Limit: 0 00:07:31.672 00:07:31.672 00:07:31.672 Active Namespaces 00:07:31.672 ================= 00:07:31.672 Namespace ID:1 00:07:31.672 Error Recovery Timeout: Unlimited 00:07:31.672 Command Set Identifier: NVM (00h) 00:07:31.672 Deallocate: Supported 00:07:31.672 Deallocated/Unwritten Error: Supported 00:07:31.672 Deallocated Read Value: All 0x00 00:07:31.672 Deallocate in Write Zeroes: Not Supported 00:07:31.672 Deallocated Guard Field: 0xFFFF 00:07:31.672 Flush: Supported 00:07:31.672 Reservation: Not Supported 00:07:31.672 Namespace Sharing Capabilities: Private 00:07:31.672 Size (in LBAs): 1048576 (4GiB) 00:07:31.672 Capacity (in LBAs): 1048576 (4GiB) 00:07:31.672 Utilization (in LBAs): 1048576 (4GiB) 00:07:31.672 Thin Provisioning: Not Supported 00:07:31.672 Per-NS Atomic Units: No 00:07:31.672 Maximum Single Source Range Length: 128 00:07:31.672 Maximum Copy Length: 128 00:07:31.672 Maximum Source Range Count: 128 00:07:31.672 NGUID/EUI64 Never Reused: No 00:07:31.672 Namespace Write Protected: No 00:07:31.672 Number of LBA Formats: 8 00:07:31.672 Current LBA Format: LBA Format #04 00:07:31.672 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.672 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.672 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.672 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.672 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:31.672 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.672 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:31.672 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.672 00:07:31.672 NVM Specific Namespace Data 00:07:31.672 =========================== 00:07:31.672 Logical Block Storage Tag Mask: 0 00:07:31.672 Protection Information Capabilities: 00:07:31.672 16b Guard Protection Information Storage Tag Support: No 00:07:31.672 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.672 Storage Tag Check Read Support: No 00:07:31.672 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.672 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.672 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.672 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.672 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.672 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.672 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.672 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.672 Namespace ID:2 00:07:31.672 Error Recovery Timeout: Unlimited 00:07:31.672 Command Set Identifier: NVM (00h) 00:07:31.672 Deallocate: Supported 00:07:31.672 Deallocated/Unwritten Error: Supported 00:07:31.672 Deallocated Read Value: All 0x00 00:07:31.672 Deallocate in Write Zeroes: Not Supported 00:07:31.672 Deallocated Guard Field: 0xFFFF 00:07:31.672 Flush: Supported 00:07:31.672 Reservation: Not Supported 00:07:31.672 Namespace Sharing Capabilities: Private 00:07:31.672 Size (in LBAs): 1048576 (4GiB) 00:07:31.672 Capacity (in LBAs): 1048576 (4GiB) 00:07:31.672 Utilization (in LBAs): 1048576 (4GiB) 00:07:31.672 Thin Provisioning: Not Supported 00:07:31.672 Per-NS Atomic Units: No 00:07:31.672 Maximum Single Source Range Length: 128 00:07:31.672 Maximum Copy Length: 128 00:07:31.672 Maximum Source Range Count: 128 00:07:31.672 NGUID/EUI64 Never Reused: No 00:07:31.672 Namespace Write Protected: No 00:07:31.672 Number of LBA Formats: 8 00:07:31.672 Current LBA Format: LBA Format #04 00:07:31.672 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.672 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.672 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.672 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.672 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:31.672 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.672 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:31.672 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.672 00:07:31.672 NVM Specific Namespace Data 00:07:31.672 =========================== 00:07:31.672 Logical Block Storage Tag Mask: 0 00:07:31.672 Protection Information Capabilities: 00:07:31.672 16b Guard Protection Information Storage Tag Support: No 00:07:31.672 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.672 Storage Tag Check Read Support: No 00:07:31.672 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.672 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.672 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.672 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.672 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.672 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.672 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.672 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.672 Namespace ID:3 00:07:31.672 Error Recovery Timeout: Unlimited 00:07:31.672 Command Set Identifier: NVM (00h) 00:07:31.672 Deallocate: Supported 00:07:31.672 Deallocated/Unwritten Error: Supported 00:07:31.672 Deallocated Read Value: All 0x00 00:07:31.672 Deallocate in Write Zeroes: Not Supported 00:07:31.672 Deallocated Guard Field: 0xFFFF 00:07:31.672 Flush: Supported 00:07:31.672 Reservation: Not Supported 00:07:31.672 
Namespace Sharing Capabilities: Private 00:07:31.672 Size (in LBAs): 1048576 (4GiB) 00:07:31.672 Capacity (in LBAs): 1048576 (4GiB) 00:07:31.672 Utilization (in LBAs): 1048576 (4GiB) 00:07:31.672 Thin Provisioning: Not Supported 00:07:31.672 Per-NS Atomic Units: No 00:07:31.672 Maximum Single Source Range Length: 128 00:07:31.672 Maximum Copy Length: 128 00:07:31.672 Maximum Source Range Count: 128 00:07:31.672 NGUID/EUI64 Never Reused: No 00:07:31.672 Namespace Write Protected: No 00:07:31.672 Number of LBA Formats: 8 00:07:31.672 Current LBA Format: LBA Format #04 00:07:31.672 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.672 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.672 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.672 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.672 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:31.672 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.672 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:31.672 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.672 00:07:31.672 NVM Specific Namespace Data 00:07:31.672 =========================== 00:07:31.672 Logical Block Storage Tag Mask: 0 00:07:31.672 Protection Information Capabilities: 00:07:31.672 16b Guard Protection Information Storage Tag Support: No 00:07:31.672 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.672 Storage Tag Check Read Support: No 00:07:31.673 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.673 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.673 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.673 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.673 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.673 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.673 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.673 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.673 17:24:04 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:31.673 17:24:04 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:31.932 ===================================================== 00:07:31.932 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:31.932 ===================================================== 00:07:31.932 Controller Capabilities/Features 00:07:31.932 ================================ 00:07:31.932 Vendor ID: 1b36 00:07:31.932 Subsystem Vendor ID: 1af4 00:07:31.932 Serial Number: 12343 00:07:31.932 Model Number: QEMU NVMe Ctrl 00:07:31.932 Firmware Version: 8.0.0 00:07:31.932 Recommended Arb Burst: 6 00:07:31.932 IEEE OUI Identifier: 00 54 52 00:07:31.932 Multi-path I/O 00:07:31.932 May have multiple subsystem ports: No 00:07:31.932 May have multiple controllers: Yes 00:07:31.932 Associated with SR-IOV VF: No 00:07:31.932 Max Data Transfer Size: 524288 00:07:31.932 Max Number of Namespaces: 256 00:07:31.932 Max Number of I/O Queues: 64 00:07:31.932 NVMe Specification Version (VS): 1.4 00:07:31.932 NVMe Specification Version (Identify): 1.4 00:07:31.932 Maximum Queue Entries: 2048 
00:07:31.932 Contiguous Queues Required: Yes 00:07:31.932 Arbitration Mechanisms Supported 00:07:31.932 Weighted Round Robin: Not Supported 00:07:31.932 Vendor Specific: Not Supported 00:07:31.932 Reset Timeout: 7500 ms 00:07:31.932 Doorbell Stride: 4 bytes 00:07:31.932 NVM Subsystem Reset: Not Supported 00:07:31.932 Command Sets Supported 00:07:31.932 NVM Command Set: Supported 00:07:31.932 Boot Partition: Not Supported 00:07:31.932 Memory Page Size Minimum: 4096 bytes 00:07:31.932 Memory Page Size Maximum: 65536 bytes 00:07:31.932 Persistent Memory Region: Not Supported 00:07:31.932 Optional Asynchronous Events Supported 00:07:31.932 Namespace Attribute Notices: Supported 00:07:31.932 Firmware Activation Notices: Not Supported 00:07:31.932 ANA Change Notices: Not Supported 00:07:31.932 PLE Aggregate Log Change Notices: Not Supported 00:07:31.932 LBA Status Info Alert Notices: Not Supported 00:07:31.932 EGE Aggregate Log Change Notices: Not Supported 00:07:31.932 Normal NVM Subsystem Shutdown event: Not Supported 00:07:31.932 Zone Descriptor Change Notices: Not Supported 00:07:31.932 Discovery Log Change Notices: Not Supported 00:07:31.932 Controller Attributes 00:07:31.932 128-bit Host Identifier: Not Supported 00:07:31.932 Non-Operational Permissive Mode: Not Supported 00:07:31.932 NVM Sets: Not Supported 00:07:31.932 Read Recovery Levels: Not Supported 00:07:31.932 Endurance Groups: Supported 00:07:31.932 Predictable Latency Mode: Not Supported 00:07:31.932 Traffic Based Keep Alive: Not Supported 00:07:31.932 Namespace Granularity: Not Supported 00:07:31.932 SQ Associations: Not Supported 00:07:31.932 UUID List: Not Supported 00:07:31.932 Multi-Domain Subsystem: Not Supported 00:07:31.932 Fixed Capacity Management: Not Supported 00:07:31.932 Variable Capacity Management: Not Supported 00:07:31.932 Delete Endurance Group: Not Supported 00:07:31.932 Delete NVM Set: Not Supported 00:07:31.932 Extended LBA Formats Supported: Supported 00:07:31.932 Flexible Data Placement Supported: Supported 00:07:31.932 00:07:31.932 Controller Memory Buffer Support 00:07:31.932 ================================ 00:07:31.932 Supported: No 00:07:31.932 00:07:31.932 Persistent Memory Region Support 00:07:31.932 ================================ 00:07:31.932 Supported: No 00:07:31.932 00:07:31.932 Admin Command Set Attributes 00:07:31.932 ============================ 00:07:31.932 Security Send/Receive: Not Supported 00:07:31.932 Format NVM: Supported 00:07:31.932 Firmware Activate/Download: Not Supported 00:07:31.932 Namespace Management: Supported 00:07:31.932 Device Self-Test: Not Supported 00:07:31.932 Directives: Supported 00:07:31.932 NVMe-MI: Not Supported 00:07:31.932 Virtualization Management: Not Supported 00:07:31.932 Doorbell Buffer Config: Supported 00:07:31.932 Get LBA Status Capability: Not Supported 00:07:31.933 Command & Feature Lockdown Capability: Not Supported 00:07:31.933 Abort Command Limit: 4 00:07:31.933 Async Event Request Limit: 4 00:07:31.933 Number of Firmware Slots: N/A 00:07:31.933 Firmware Slot 1 Read-Only: N/A 00:07:31.933 Firmware Activation Without Reset: N/A 00:07:31.933 Multiple Update Detection Support: N/A 00:07:31.933 Firmware Update Granularity: No Information Provided 00:07:31.933 Per-Namespace SMART Log: Yes 00:07:31.933 Asymmetric Namespace Access Log Page: Not Supported 00:07:31.933 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:31.933 Command Effects Log Page: Supported 00:07:31.933 Get Log Page Extended Data: Supported 00:07:31.933 Telemetry Log Pages: Not 
Supported 00:07:31.933 Persistent Event Log Pages: Not Supported 00:07:31.933 Supported Log Pages Log Page: May Support 00:07:31.933 Commands Supported & Effects Log Page: Not Supported 00:07:31.933 Feature Identifiers & Effects Log Page: May Support 00:07:31.933 NVMe-MI Commands & Effects Log Page: May Support 00:07:31.933 Data Area 4 for Telemetry Log: Not Supported 00:07:31.933 Error Log Page Entries Supported: 1 00:07:31.933 Keep Alive: Not Supported 00:07:31.933 00:07:31.933 NVM Command Set Attributes 00:07:31.933 ========================== 00:07:31.933 Submission Queue Entry Size 00:07:31.933 Max: 64 00:07:31.933 Min: 64 00:07:31.933 Completion Queue Entry Size 00:07:31.933 Max: 16 00:07:31.933 Min: 16 00:07:31.933 Number of Namespaces: 256 00:07:31.933 Compare Command: Supported 00:07:31.933 Write Uncorrectable Command: Not Supported 00:07:31.933 Dataset Management Command: Supported 00:07:31.933 Write Zeroes Command: Supported 00:07:31.933 Set Features Save Field: Supported 00:07:31.933 Reservations: Not Supported 00:07:31.933 Timestamp: Supported 00:07:31.933 Copy: Supported 00:07:31.933 Volatile Write Cache: Present 00:07:31.933 Atomic Write Unit (Normal): 1 00:07:31.933 Atomic Write Unit (PFail): 1 00:07:31.933 Atomic Compare & Write Unit: 1 00:07:31.933 Fused Compare & Write: Not Supported 00:07:31.933 Scatter-Gather List 00:07:31.933 SGL Command Set: Supported 00:07:31.933 SGL Keyed: Not Supported 00:07:31.933 SGL Bit Bucket Descriptor: Not Supported 00:07:31.933 SGL Metadata Pointer: Not Supported 00:07:31.933 Oversized SGL: Not Supported 00:07:31.933 SGL Metadata Address: Not Supported 00:07:31.933 SGL Offset: Not Supported 00:07:31.933 Transport SGL Data Block: Not Supported 00:07:31.933 Replay Protected Memory Block: Not Supported 00:07:31.933 00:07:31.933 Firmware Slot Information 00:07:31.933 ========================= 00:07:31.933 Active slot: 1 00:07:31.933 Slot 1 Firmware Revision: 1.0 00:07:31.933 00:07:31.933 00:07:31.933 Commands Supported and Effects 00:07:31.933 ============================== 00:07:31.933 Admin Commands 00:07:31.933 -------------- 00:07:31.933 Delete I/O Submission Queue (00h): Supported 00:07:31.933 Create I/O Submission Queue (01h): Supported 00:07:31.933 Get Log Page (02h): Supported 00:07:31.933 Delete I/O Completion Queue (04h): Supported 00:07:31.933 Create I/O Completion Queue (05h): Supported 00:07:31.933 Identify (06h): Supported 00:07:31.933 Abort (08h): Supported 00:07:31.933 Set Features (09h): Supported 00:07:31.933 Get Features (0Ah): Supported 00:07:31.933 Asynchronous Event Request (0Ch): Supported 00:07:31.933 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:31.933 Directive Send (19h): Supported 00:07:31.933 Directive Receive (1Ah): Supported 00:07:31.933 Virtualization Management (1Ch): Supported 00:07:31.933 Doorbell Buffer Config (7Ch): Supported 00:07:31.933 Format NVM (80h): Supported LBA-Change 00:07:31.933 I/O Commands 00:07:31.933 ------------ 00:07:31.933 Flush (00h): Supported LBA-Change 00:07:31.933 Write (01h): Supported LBA-Change 00:07:31.933 Read (02h): Supported 00:07:31.933 Compare (05h): Supported 00:07:31.933 Write Zeroes (08h): Supported LBA-Change 00:07:31.933 Dataset Management (09h): Supported LBA-Change 00:07:31.933 Unknown (0Ch): Supported 00:07:31.933 Unknown (12h): Supported 00:07:31.933 Copy (19h): Supported LBA-Change 00:07:31.933 Unknown (1Dh): Supported LBA-Change 00:07:31.933 00:07:31.933 Error Log 00:07:31.933 ========= 00:07:31.933 00:07:31.933 Arbitration 00:07:31.933 =========== 
00:07:31.933 Arbitration Burst: no limit 00:07:31.933 00:07:31.933 Power Management 00:07:31.933 ================ 00:07:31.933 Number of Power States: 1 00:07:31.933 Current Power State: Power State #0 00:07:31.933 Power State #0: 00:07:31.933 Max Power: 25.00 W 00:07:31.933 Non-Operational State: Operational 00:07:31.933 Entry Latency: 16 microseconds 00:07:31.933 Exit Latency: 4 microseconds 00:07:31.933 Relative Read Throughput: 0 00:07:31.933 Relative Read Latency: 0 00:07:31.933 Relative Write Throughput: 0 00:07:31.933 Relative Write Latency: 0 00:07:31.933 Idle Power: Not Reported 00:07:31.933 Active Power: Not Reported 00:07:31.933 Non-Operational Permissive Mode: Not Supported 00:07:31.933 00:07:31.933 Health Information 00:07:31.933 ================== 00:07:31.933 Critical Warnings: 00:07:31.933 Available Spare Space: OK 00:07:31.933 Temperature: OK 00:07:31.933 Device Reliability: OK 00:07:31.933 Read Only: No 00:07:31.933 Volatile Memory Backup: OK 00:07:31.933 Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.933 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:31.933 Available Spare: 0% 00:07:31.933 Available Spare Threshold: 0% 00:07:31.933 Life Percentage Used: 0% 00:07:31.933 Data Units Read: 1217 00:07:31.933 Data Units Written: 1146 00:07:31.933 Host Read Commands: 44266 00:07:31.933 Host Write Commands: 43690 00:07:31.933 Controller Busy Time: 0 minutes 00:07:31.933 Power Cycles: 0 00:07:31.933 Power On Hours: 0 hours 00:07:31.933 Unsafe Shutdowns: 0 00:07:31.933 Unrecoverable Media Errors: 0 00:07:31.933 Lifetime Error Log Entries: 0 00:07:31.933 Warning Temperature Time: 0 minutes 00:07:31.933 Critical Temperature Time: 0 minutes 00:07:31.933 00:07:31.933 Number of Queues 00:07:31.933 ================ 00:07:31.933 Number of I/O Submission Queues: 64 00:07:31.933 Number of I/O Completion Queues: 64 00:07:31.933 00:07:31.933 ZNS Specific Controller Data 00:07:31.933 ============================ 00:07:31.933 Zone Append Size Limit: 0 00:07:31.933 00:07:31.933 00:07:31.933 Active Namespaces 00:07:31.933 ================= 00:07:31.933 Namespace ID:1 00:07:31.933 Error Recovery Timeout: Unlimited 00:07:31.933 Command Set Identifier: NVM (00h) 00:07:31.933 Deallocate: Supported 00:07:31.933 Deallocated/Unwritten Error: Supported 00:07:31.933 Deallocated Read Value: All 0x00 00:07:31.933 Deallocate in Write Zeroes: Not Supported 00:07:31.933 Deallocated Guard Field: 0xFFFF 00:07:31.933 Flush: Supported 00:07:31.933 Reservation: Not Supported 00:07:31.933 Namespace Sharing Capabilities: Multiple Controllers 00:07:31.933 Size (in LBAs): 262144 (1GiB) 00:07:31.933 Capacity (in LBAs): 262144 (1GiB) 00:07:31.933 Utilization (in LBAs): 262144 (1GiB) 00:07:31.933 Thin Provisioning: Not Supported 00:07:31.933 Per-NS Atomic Units: No 00:07:31.933 Maximum Single Source Range Length: 128 00:07:31.933 Maximum Copy Length: 128 00:07:31.933 Maximum Source Range Count: 128 00:07:31.933 NGUID/EUI64 Never Reused: No 00:07:31.933 Namespace Write Protected: No 00:07:31.933 Endurance group ID: 1 00:07:31.933 Number of LBA Formats: 8 00:07:31.933 Current LBA Format: LBA Format #04 00:07:31.933 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.933 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.933 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.933 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.933 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:31.933 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.933 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:31.933 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.933 00:07:31.933 Get Feature FDP: 00:07:31.933 ================ 00:07:31.933 Enabled: Yes 00:07:31.933 FDP configuration index: 0 00:07:31.933 00:07:31.933 FDP configurations log page 00:07:31.933 =========================== 00:07:31.933 Number of FDP configurations: 1 00:07:31.933 Version: 0 00:07:31.933 Size: 112 00:07:31.933 FDP Configuration Descriptor: 0 00:07:31.933 Descriptor Size: 96 00:07:31.933 Reclaim Group Identifier format: 2 00:07:31.933 FDP Volatile Write Cache: Not Present 00:07:31.933 FDP Configuration: Valid 00:07:31.933 Vendor Specific Size: 0 00:07:31.933 Number of Reclaim Groups: 2 00:07:31.933 Number of Reclaim Unit Handles: 8 00:07:31.933 Max Placement Identifiers: 128 00:07:31.934 Number of Namespaces Supported: 256 00:07:31.934 Reclaim Unit Nominal Size: 6000000 bytes 00:07:31.934 Estimated Reclaim Unit Time Limit: Not Reported 00:07:31.934 RUH Desc #000: RUH Type: Initially Isolated 00:07:31.934 RUH Desc #001: RUH Type: Initially Isolated 00:07:31.934 RUH Desc #002: RUH Type: Initially Isolated 00:07:31.934 RUH Desc #003: RUH Type: Initially Isolated 00:07:31.934 RUH Desc #004: RUH Type: Initially Isolated 00:07:31.934 RUH Desc #005: RUH Type: Initially Isolated 00:07:31.934 RUH Desc #006: RUH Type: Initially Isolated 00:07:31.934 RUH Desc #007: RUH Type: Initially Isolated 00:07:31.934 00:07:31.934 FDP reclaim unit handle usage log page 00:07:31.934 ====================================== 00:07:31.934 Number of Reclaim Unit Handles: 8 00:07:31.934 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:31.934 RUH Usage Desc #001: RUH Attributes: Unused 00:07:31.934 RUH Usage Desc #002: RUH Attributes: Unused 00:07:31.934 RUH Usage Desc #003: RUH Attributes: Unused 00:07:31.934 RUH Usage Desc #004: RUH Attributes: Unused 00:07:31.934 RUH Usage Desc #005: RUH Attributes: Unused 00:07:31.934 RUH Usage Desc #006: RUH Attributes: Unused 00:07:31.934 RUH Usage Desc #007: RUH Attributes: Unused 00:07:31.934 00:07:31.934 FDP statistics log page 00:07:31.934 ======================= 00:07:31.934 Host bytes with metadata written: 688300032 00:07:31.934 Media bytes with metadata written: 688390144 00:07:31.934 Media bytes erased: 0 00:07:31.934 00:07:31.934 FDP events log page 00:07:31.934 =================== 00:07:31.934 Number of FDP events: 0 00:07:31.934 00:07:31.934 NVM Specific Namespace Data 00:07:31.934 =========================== 00:07:31.934 Logical Block Storage Tag Mask: 0 00:07:31.934 Protection Information Capabilities: 00:07:31.934 16b Guard Protection Information Storage Tag Support: No 00:07:31.934 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.934 Storage Tag Check Read Support: No 00:07:31.934 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.934 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.934 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.934 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.934 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.934 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.934 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.934 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.934 00:07:31.934 real 0m1.231s 00:07:31.934 user 0m0.464s 00:07:31.934 sys 0m0.531s 00:07:31.934 17:24:05 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.934 ************************************ 00:07:31.934 END TEST nvme_identify 00:07:31.934 ************************************ 00:07:31.934 17:24:05 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:31.934 17:24:05 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:31.934 17:24:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.934 17:24:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.934 17:24:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:31.934 ************************************ 00:07:31.934 START TEST nvme_perf 00:07:31.934 ************************************ 00:07:31.934 17:24:05 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:31.934 17:24:05 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:33.320 Initializing NVMe Controllers 00:07:33.320 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:33.320 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:33.320 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:33.320 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:33.320 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:33.320 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:33.320 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:33.320 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:33.320 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:33.320 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:33.320 Initialization complete. Launching workers. 
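The identify dump above and the perf summary that follows can be sanity-checked by hand. The sketch below restates the perf invocation from this run and verifies two of the reported figures; the flag annotations are assumptions based on typical spdk_nvme_perf usage rather than anything stated in this log, so confirm them against --help on your own build.

# Reproduction sketch, assuming a built SPDK tree at the path used by this job.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
# -q 128   : queue depth (outstanding I/Os per namespace)
# -w read  : sequential-read workload
# -o 12288 : I/O size in bytes (12 KiB)
# -t 1     : run time in seconds
# -LL      : software latency tracking; given twice to request detailed histograms
# -i 0     : shared-memory group ID
# -N       : skip the controller shutdown-notification step on exit
sudo "$SPDK_BIN/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

# Namespace size from the identify dump: 1048576 LBAs at 4096 B (current Format #04).
awk 'BEGIN { print 1048576 * 4096 / 2^30, "GiB" }'              # -> 4 GiB
# Throughput row for PCIE (0000:00:10.0) NSID 1: MiB/s = IOPS * io_size / 2^20.
awk 'BEGIN { printf "%.2f MiB/s\n", 8097.42 * 12288 / 2^20 }'   # -> 94.89 MiB/s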
00:07:33.320 ======================================================== 00:07:33.320 Latency(us) 00:07:33.320 Device Information : IOPS MiB/s Average min max 00:07:33.320 PCIE (0000:00:10.0) NSID 1 from core 0: 8097.42 94.89 15831.52 10600.76 40885.39 00:07:33.320 PCIE (0000:00:11.0) NSID 1 from core 0: 8097.42 94.89 15810.56 10570.72 40134.68 00:07:33.320 PCIE (0000:00:13.0) NSID 1 from core 0: 8097.42 94.89 15787.59 10121.49 40478.99 00:07:33.320 PCIE (0000:00:12.0) NSID 1 from core 0: 8097.42 94.89 15764.03 10530.28 39885.57 00:07:33.320 PCIE (0000:00:12.0) NSID 2 from core 0: 8097.42 94.89 15739.86 9798.20 39427.23 00:07:33.320 PCIE (0000:00:12.0) NSID 3 from core 0: 8161.18 95.64 15592.91 9150.73 27988.95 00:07:33.320 ======================================================== 00:07:33.320 Total : 48648.26 570.10 15754.20 9150.73 40885.39 00:07:33.320 00:07:33.320 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:33.320 ================================================================================= 00:07:33.320 1.00000% : 11342.769us 00:07:33.320 10.00000% : 13107.200us 00:07:33.320 25.00000% : 14417.920us 00:07:33.320 50.00000% : 15627.815us 00:07:33.320 75.00000% : 16837.711us 00:07:33.320 90.00000% : 18148.431us 00:07:33.320 95.00000% : 18955.028us 00:07:33.320 98.00000% : 20265.748us 00:07:33.320 99.00000% : 29642.437us 00:07:33.320 99.50000% : 39119.951us 00:07:33.320 99.90000% : 40531.495us 00:07:33.320 99.99000% : 40934.794us 00:07:33.320 99.99900% : 40934.794us 00:07:33.320 99.99990% : 40934.794us 00:07:33.320 99.99999% : 40934.794us 00:07:33.320 00:07:33.320 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:33.320 ================================================================================= 00:07:33.320 1.00000% : 11191.532us 00:07:33.320 10.00000% : 13107.200us 00:07:33.320 25.00000% : 14417.920us 00:07:33.320 50.00000% : 15728.640us 00:07:33.320 75.00000% : 16736.886us 00:07:33.320 90.00000% : 18148.431us 00:07:33.320 95.00000% : 19156.677us 00:07:33.320 98.00000% : 20164.923us 00:07:33.320 99.00000% : 28835.840us 00:07:33.320 99.50000% : 38716.652us 00:07:33.320 99.90000% : 39926.548us 00:07:33.320 99.99000% : 40329.846us 00:07:33.320 99.99900% : 40329.846us 00:07:33.320 99.99990% : 40329.846us 00:07:33.320 99.99999% : 40329.846us 00:07:33.320 00:07:33.320 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:33.320 ================================================================================= 00:07:33.320 1.00000% : 10989.883us 00:07:33.320 10.00000% : 13006.375us 00:07:33.320 25.00000% : 14417.920us 00:07:33.320 50.00000% : 15627.815us 00:07:33.320 75.00000% : 16837.711us 00:07:33.320 90.00000% : 18350.080us 00:07:33.320 95.00000% : 19358.326us 00:07:33.320 98.00000% : 20467.397us 00:07:33.320 99.00000% : 28835.840us 00:07:33.320 99.50000% : 39119.951us 00:07:33.320 99.90000% : 40329.846us 00:07:33.320 99.99000% : 40531.495us 00:07:33.320 99.99900% : 40531.495us 00:07:33.320 99.99990% : 40531.495us 00:07:33.320 99.99999% : 40531.495us 00:07:33.320 00:07:33.320 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:33.320 ================================================================================= 00:07:33.320 1.00000% : 11141.120us 00:07:33.320 10.00000% : 13107.200us 00:07:33.320 25.00000% : 14317.095us 00:07:33.320 50.00000% : 15627.815us 00:07:33.320 75.00000% : 16736.886us 00:07:33.320 90.00000% : 18249.255us 00:07:33.320 95.00000% : 18955.028us 00:07:33.320 98.00000% : 20467.397us 
00:07:33.320 99.00000% : 27625.945us 00:07:33.320 99.50000% : 38515.003us 00:07:33.320 99.90000% : 39724.898us 00:07:33.320 99.99000% : 39926.548us 00:07:33.320 99.99900% : 39926.548us 00:07:33.320 99.99990% : 39926.548us 00:07:33.320 99.99999% : 39926.548us 00:07:33.320 00:07:33.320 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:33.320 ================================================================================= 00:07:33.320 1.00000% : 10788.234us 00:07:33.320 10.00000% : 13006.375us 00:07:33.320 25.00000% : 14317.095us 00:07:33.320 50.00000% : 15728.640us 00:07:33.320 75.00000% : 16938.535us 00:07:33.320 90.00000% : 18047.606us 00:07:33.320 95.00000% : 18652.554us 00:07:33.320 98.00000% : 20467.397us 00:07:33.320 99.00000% : 27222.646us 00:07:33.320 99.50000% : 37910.055us 00:07:33.320 99.90000% : 39119.951us 00:07:33.320 99.99000% : 39523.249us 00:07:33.320 99.99900% : 39523.249us 00:07:33.320 99.99990% : 39523.249us 00:07:33.320 99.99999% : 39523.249us 00:07:33.320 00:07:33.320 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:33.320 ================================================================================= 00:07:33.320 1.00000% : 10989.883us 00:07:33.320 10.00000% : 13107.200us 00:07:33.320 25.00000% : 14317.095us 00:07:33.320 50.00000% : 15526.991us 00:07:33.320 75.00000% : 16837.711us 00:07:33.320 90.00000% : 18047.606us 00:07:33.320 95.00000% : 18652.554us 00:07:33.320 98.00000% : 19660.800us 00:07:33.320 99.00000% : 20568.222us 00:07:33.320 99.50000% : 26617.698us 00:07:33.320 99.90000% : 27827.594us 00:07:33.320 99.99000% : 28029.243us 00:07:33.320 99.99900% : 28029.243us 00:07:33.320 99.99990% : 28029.243us 00:07:33.320 99.99999% : 28029.243us 00:07:33.320 00:07:33.320 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:33.320 ============================================================================== 00:07:33.320 Range in us Cumulative IO count 00:07:33.320 10586.585 - 10636.997: 0.0369% ( 3) 00:07:33.320 10636.997 - 10687.409: 0.0861% ( 4) 00:07:33.320 10687.409 - 10737.822: 0.0984% ( 1) 00:07:33.320 10737.822 - 10788.234: 0.1476% ( 4) 00:07:33.320 10788.234 - 10838.646: 0.2215% ( 6) 00:07:33.320 10838.646 - 10889.058: 0.3199% ( 8) 00:07:33.320 10889.058 - 10939.471: 0.4306% ( 9) 00:07:33.320 10939.471 - 10989.883: 0.4675% ( 3) 00:07:33.320 10989.883 - 11040.295: 0.4921% ( 2) 00:07:33.320 11040.295 - 11090.708: 0.5782% ( 7) 00:07:33.320 11090.708 - 11141.120: 0.6275% ( 4) 00:07:33.320 11141.120 - 11191.532: 0.6890% ( 5) 00:07:33.320 11191.532 - 11241.945: 0.7751% ( 7) 00:07:33.320 11241.945 - 11292.357: 0.9350% ( 13) 00:07:33.320 11292.357 - 11342.769: 1.1073% ( 14) 00:07:33.320 11342.769 - 11393.182: 1.2672% ( 13) 00:07:33.321 11393.182 - 11443.594: 1.4149% ( 12) 00:07:33.321 11443.594 - 11494.006: 1.5502% ( 11) 00:07:33.321 11494.006 - 11544.418: 1.6855% ( 11) 00:07:33.321 11544.418 - 11594.831: 1.8701% ( 15) 00:07:33.321 11594.831 - 11645.243: 1.9808% ( 9) 00:07:33.321 11645.243 - 11695.655: 2.1038% ( 10) 00:07:33.321 11695.655 - 11746.068: 2.2884% ( 15) 00:07:33.321 11746.068 - 11796.480: 2.3499% ( 5) 00:07:33.321 11796.480 - 11846.892: 2.5468% ( 16) 00:07:33.321 11846.892 - 11897.305: 2.6575% ( 9) 00:07:33.321 11897.305 - 11947.717: 2.8297% ( 14) 00:07:33.321 11947.717 - 11998.129: 3.0266% ( 16) 00:07:33.321 11998.129 - 12048.542: 3.1619% ( 11) 00:07:33.321 12048.542 - 12098.954: 3.2972% ( 11) 00:07:33.321 12098.954 - 12149.366: 3.4203% ( 10) 00:07:33.321 12149.366 - 12199.778: 3.5679% ( 12) 
00:07:33.321 12199.778 - 12250.191: 3.7648% ( 16) 00:07:33.321 12250.191 - 12300.603: 3.9493% ( 15) 00:07:33.321 12300.603 - 12351.015: 4.1339% ( 15) 00:07:33.321 12351.015 - 12401.428: 4.4045% ( 22) 00:07:33.321 12401.428 - 12451.840: 4.6875% ( 23) 00:07:33.321 12451.840 - 12502.252: 5.0935% ( 33) 00:07:33.321 12502.252 - 12552.665: 5.4995% ( 33) 00:07:33.321 12552.665 - 12603.077: 5.8317% ( 27) 00:07:33.321 12603.077 - 12653.489: 6.2008% ( 30) 00:07:33.321 12653.489 - 12703.902: 6.7667% ( 46) 00:07:33.321 12703.902 - 12754.314: 7.2958% ( 43) 00:07:33.321 12754.314 - 12804.726: 7.6895% ( 32) 00:07:33.321 12804.726 - 12855.138: 8.2431% ( 45) 00:07:33.321 12855.138 - 12905.551: 8.7598% ( 42) 00:07:33.321 12905.551 - 13006.375: 9.6826% ( 75) 00:07:33.321 13006.375 - 13107.200: 10.8760% ( 97) 00:07:33.321 13107.200 - 13208.025: 12.0325% ( 94) 00:07:33.321 13208.025 - 13308.849: 13.1029% ( 87) 00:07:33.321 13308.849 - 13409.674: 14.2470% ( 93) 00:07:33.321 13409.674 - 13510.498: 15.5389% ( 105) 00:07:33.321 13510.498 - 13611.323: 16.9906% ( 118) 00:07:33.321 13611.323 - 13712.148: 17.9011% ( 74) 00:07:33.321 13712.148 - 13812.972: 18.8607% ( 78) 00:07:33.321 13812.972 - 13913.797: 19.7835% ( 75) 00:07:33.321 13913.797 - 14014.622: 20.9523% ( 95) 00:07:33.321 14014.622 - 14115.446: 21.8504% ( 73) 00:07:33.321 14115.446 - 14216.271: 22.9454% ( 89) 00:07:33.321 14216.271 - 14317.095: 24.3233% ( 112) 00:07:33.321 14317.095 - 14417.920: 25.7628% ( 117) 00:07:33.321 14417.920 - 14518.745: 27.4114% ( 134) 00:07:33.321 14518.745 - 14619.569: 29.2569% ( 150) 00:07:33.321 14619.569 - 14720.394: 30.6594% ( 114) 00:07:33.321 14720.394 - 14821.218: 32.2589% ( 130) 00:07:33.321 14821.218 - 14922.043: 34.2397% ( 161) 00:07:33.321 14922.043 - 15022.868: 36.3312% ( 170) 00:07:33.321 15022.868 - 15123.692: 38.5458% ( 180) 00:07:33.321 15123.692 - 15224.517: 40.8342% ( 186) 00:07:33.321 15224.517 - 15325.342: 43.2948% ( 200) 00:07:33.321 15325.342 - 15426.166: 45.6447% ( 191) 00:07:33.321 15426.166 - 15526.991: 47.7608% ( 172) 00:07:33.321 15526.991 - 15627.815: 50.1230% ( 192) 00:07:33.321 15627.815 - 15728.640: 52.5837% ( 200) 00:07:33.321 15728.640 - 15829.465: 55.0197% ( 198) 00:07:33.321 15829.465 - 15930.289: 57.3450% ( 189) 00:07:33.321 15930.289 - 16031.114: 59.8794% ( 206) 00:07:33.321 16031.114 - 16131.938: 62.1063% ( 181) 00:07:33.321 16131.938 - 16232.763: 64.2224% ( 172) 00:07:33.321 16232.763 - 16333.588: 66.3017% ( 169) 00:07:33.321 16333.588 - 16434.412: 68.0733% ( 144) 00:07:33.321 16434.412 - 16535.237: 70.0295% ( 159) 00:07:33.321 16535.237 - 16636.062: 71.5551% ( 124) 00:07:33.321 16636.062 - 16736.886: 73.5974% ( 166) 00:07:33.321 16736.886 - 16837.711: 75.2953% ( 138) 00:07:33.321 16837.711 - 16938.535: 77.0546% ( 143) 00:07:33.321 16938.535 - 17039.360: 78.4449% ( 113) 00:07:33.321 17039.360 - 17140.185: 80.0197% ( 128) 00:07:33.321 17140.185 - 17241.009: 81.7667% ( 142) 00:07:33.321 17241.009 - 17341.834: 82.9601% ( 97) 00:07:33.321 17341.834 - 17442.658: 83.8829% ( 75) 00:07:33.321 17442.658 - 17543.483: 84.8056% ( 75) 00:07:33.321 17543.483 - 17644.308: 85.7653% ( 78) 00:07:33.321 17644.308 - 17745.132: 86.8110% ( 85) 00:07:33.321 17745.132 - 17845.957: 87.8691% ( 86) 00:07:33.321 17845.957 - 17946.782: 88.7672% ( 73) 00:07:33.321 17946.782 - 18047.606: 89.5546% ( 64) 00:07:33.321 18047.606 - 18148.431: 90.2928% ( 60) 00:07:33.321 18148.431 - 18249.255: 91.1663% ( 71) 00:07:33.321 18249.255 - 18350.080: 91.8922% ( 59) 00:07:33.321 18350.080 - 18450.905: 92.4705% ( 47) 00:07:33.321 
18450.905 - 18551.729: 93.0979% ( 51) 00:07:33.321 18551.729 - 18652.554: 93.7992% ( 57) 00:07:33.321 18652.554 - 18753.378: 94.2052% ( 33) 00:07:33.321 18753.378 - 18854.203: 94.6358% ( 35) 00:07:33.321 18854.203 - 18955.028: 95.0172% ( 31) 00:07:33.321 18955.028 - 19055.852: 95.2879% ( 22) 00:07:33.321 19055.852 - 19156.677: 95.7431% ( 37) 00:07:33.321 19156.677 - 19257.502: 95.9031% ( 13) 00:07:33.321 19257.502 - 19358.326: 96.1368% ( 19) 00:07:33.321 19358.326 - 19459.151: 96.3091% ( 14) 00:07:33.321 19459.151 - 19559.975: 96.6658% ( 29) 00:07:33.321 19559.975 - 19660.800: 96.9488% ( 23) 00:07:33.321 19660.800 - 19761.625: 97.1088% ( 13) 00:07:33.321 19761.625 - 19862.449: 97.2318% ( 10) 00:07:33.321 19862.449 - 19963.274: 97.4040% ( 14) 00:07:33.321 19963.274 - 20064.098: 97.6255% ( 18) 00:07:33.321 20064.098 - 20164.923: 97.8469% ( 18) 00:07:33.321 20164.923 - 20265.748: 98.0438% ( 16) 00:07:33.321 20265.748 - 20366.572: 98.1668% ( 10) 00:07:33.321 20366.572 - 20467.397: 98.2037% ( 3) 00:07:33.321 20467.397 - 20568.222: 98.3145% ( 9) 00:07:33.321 20568.222 - 20669.046: 98.3268% ( 1) 00:07:33.321 20669.046 - 20769.871: 98.3760% ( 4) 00:07:33.321 20769.871 - 20870.695: 98.3883% ( 1) 00:07:33.321 20971.520 - 21072.345: 98.4252% ( 3) 00:07:33.321 28230.892 - 28432.542: 98.4867% ( 5) 00:07:33.321 28432.542 - 28634.191: 98.5728% ( 7) 00:07:33.321 28634.191 - 28835.840: 98.6713% ( 8) 00:07:33.321 28835.840 - 29037.489: 98.7574% ( 7) 00:07:33.321 29037.489 - 29239.138: 98.8558% ( 8) 00:07:33.321 29239.138 - 29440.788: 98.9542% ( 8) 00:07:33.321 29440.788 - 29642.437: 99.0650% ( 9) 00:07:33.321 29642.437 - 29844.086: 99.1511% ( 7) 00:07:33.321 29844.086 - 30045.735: 99.2126% ( 5) 00:07:33.321 37910.055 - 38111.705: 99.2249% ( 1) 00:07:33.321 38111.705 - 38313.354: 99.2618% ( 3) 00:07:33.321 38313.354 - 38515.003: 99.3233% ( 5) 00:07:33.321 38515.003 - 38716.652: 99.3356% ( 1) 00:07:33.321 38716.652 - 38918.302: 99.3725% ( 3) 00:07:33.321 38918.302 - 39119.951: 99.5325% ( 13) 00:07:33.321 39119.951 - 39321.600: 99.5448% ( 1) 00:07:33.321 39321.600 - 39523.249: 99.6186% ( 6) 00:07:33.321 39523.249 - 39724.898: 99.6678% ( 4) 00:07:33.321 39724.898 - 39926.548: 99.7170% ( 4) 00:07:33.321 40128.197 - 40329.846: 99.7539% ( 3) 00:07:33.321 40329.846 - 40531.495: 99.9754% ( 18) 00:07:33.321 40733.145 - 40934.794: 100.0000% ( 2) 00:07:33.321 00:07:33.321 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:33.321 ============================================================================== 00:07:33.321 Range in us Cumulative IO count 00:07:33.321 10536.172 - 10586.585: 0.0246% ( 2) 00:07:33.321 10586.585 - 10636.997: 0.0615% ( 3) 00:07:33.321 10636.997 - 10687.409: 0.0984% ( 3) 00:07:33.321 10687.409 - 10737.822: 0.1476% ( 4) 00:07:33.321 10737.822 - 10788.234: 0.2092% ( 5) 00:07:33.321 10788.234 - 10838.646: 0.2953% ( 7) 00:07:33.321 10838.646 - 10889.058: 0.3691% ( 6) 00:07:33.321 10889.058 - 10939.471: 0.4552% ( 7) 00:07:33.321 10939.471 - 10989.883: 0.6029% ( 12) 00:07:33.321 10989.883 - 11040.295: 0.7136% ( 9) 00:07:33.321 11040.295 - 11090.708: 0.8366% ( 10) 00:07:33.321 11090.708 - 11141.120: 0.9719% ( 11) 00:07:33.321 11141.120 - 11191.532: 1.1073% ( 11) 00:07:33.321 11191.532 - 11241.945: 1.2303% ( 10) 00:07:33.321 11241.945 - 11292.357: 1.4026% ( 14) 00:07:33.321 11292.357 - 11342.769: 1.5625% ( 13) 00:07:33.321 11342.769 - 11393.182: 1.7224% ( 13) 00:07:33.321 11393.182 - 11443.594: 1.9070% ( 15) 00:07:33.321 11443.594 - 11494.006: 2.0546% ( 12) 00:07:33.321 11494.006 - 
11544.418: 2.2269% ( 14) 00:07:33.321 11544.418 - 11594.831: 2.3991% ( 14) 00:07:33.321 11594.831 - 11645.243: 2.5098% ( 9) 00:07:33.321 11645.243 - 11695.655: 2.6329% ( 10) 00:07:33.321 11695.655 - 11746.068: 2.7682% ( 11) 00:07:33.321 11746.068 - 11796.480: 2.8912% ( 10) 00:07:33.321 11796.480 - 11846.892: 3.0266% ( 11) 00:07:33.321 11846.892 - 11897.305: 3.1619% ( 11) 00:07:33.321 11897.305 - 11947.717: 3.3095% ( 12) 00:07:33.321 11947.717 - 11998.129: 3.4326% ( 10) 00:07:33.321 11998.129 - 12048.542: 3.6294% ( 16) 00:07:33.321 12048.542 - 12098.954: 3.7894% ( 13) 00:07:33.321 12098.954 - 12149.366: 3.9616% ( 14) 00:07:33.321 12149.366 - 12199.778: 4.1216% ( 13) 00:07:33.321 12199.778 - 12250.191: 4.3922% ( 22) 00:07:33.321 12250.191 - 12300.603: 4.6137% ( 18) 00:07:33.321 12300.603 - 12351.015: 4.8597% ( 20) 00:07:33.321 12351.015 - 12401.428: 5.1550% ( 24) 00:07:33.321 12401.428 - 12451.840: 5.4995% ( 28) 00:07:33.321 12451.840 - 12502.252: 5.8809% ( 31) 00:07:33.321 12502.252 - 12552.665: 6.2254% ( 28) 00:07:33.321 12552.665 - 12603.077: 6.5699% ( 28) 00:07:33.321 12603.077 - 12653.489: 6.9390% ( 30) 00:07:33.321 12653.489 - 12703.902: 7.2466% ( 25) 00:07:33.321 12703.902 - 12754.314: 7.6649% ( 34) 00:07:33.321 12754.314 - 12804.726: 8.0217% ( 29) 00:07:33.321 12804.726 - 12855.138: 8.4031% ( 31) 00:07:33.321 12855.138 - 12905.551: 8.7229% ( 26) 00:07:33.321 12905.551 - 13006.375: 9.7072% ( 80) 00:07:33.321 13006.375 - 13107.200: 10.7037% ( 81) 00:07:33.321 13107.200 - 13208.025: 11.7495% ( 85) 00:07:33.321 13208.025 - 13308.849: 12.9552% ( 98) 00:07:33.321 13308.849 - 13409.674: 14.1240% ( 95) 00:07:33.321 13409.674 - 13510.498: 15.2190% ( 89) 00:07:33.321 13510.498 - 13611.323: 16.2156% ( 81) 00:07:33.321 13611.323 - 13712.148: 17.4951% ( 104) 00:07:33.321 13712.148 - 13812.972: 18.7254% ( 100) 00:07:33.321 13812.972 - 13913.797: 19.7835% ( 86) 00:07:33.321 13913.797 - 14014.622: 20.7554% ( 79) 00:07:33.321 14014.622 - 14115.446: 21.5920% ( 68) 00:07:33.321 14115.446 - 14216.271: 22.7731% ( 96) 00:07:33.321 14216.271 - 14317.095: 24.2741% ( 122) 00:07:33.321 14317.095 - 14417.920: 25.8366% ( 127) 00:07:33.321 14417.920 - 14518.745: 27.3253% ( 121) 00:07:33.321 14518.745 - 14619.569: 28.7279% ( 114) 00:07:33.321 14619.569 - 14720.394: 30.1304% ( 114) 00:07:33.321 14720.394 - 14821.218: 31.6314% ( 122) 00:07:33.321 14821.218 - 14922.043: 33.3661% ( 141) 00:07:33.321 14922.043 - 15022.868: 35.3223% ( 159) 00:07:33.321 15022.868 - 15123.692: 37.7092% ( 194) 00:07:33.321 15123.692 - 15224.517: 39.9483% ( 182) 00:07:33.321 15224.517 - 15325.342: 42.1629% ( 180) 00:07:33.321 15325.342 - 15426.166: 44.3898% ( 181) 00:07:33.321 15426.166 - 15526.991: 46.9242% ( 206) 00:07:33.321 15526.991 - 15627.815: 49.6063% ( 218) 00:07:33.321 15627.815 - 15728.640: 52.2638% ( 216) 00:07:33.321 15728.640 - 15829.465: 55.1058% ( 231) 00:07:33.321 15829.465 - 15930.289: 57.5418% ( 198) 00:07:33.321 15930.289 - 16031.114: 60.0025% ( 200) 00:07:33.321 16031.114 - 16131.938: 62.2293% ( 181) 00:07:33.321 16131.938 - 16232.763: 64.5915% ( 192) 00:07:33.321 16232.763 - 16333.588: 67.0522% ( 200) 00:07:33.321 16333.588 - 16434.412: 69.4267% ( 193) 00:07:33.321 16434.412 - 16535.237: 71.6781% ( 183) 00:07:33.321 16535.237 - 16636.062: 73.7082% ( 165) 00:07:33.321 16636.062 - 16736.886: 75.5782% ( 152) 00:07:33.321 16736.886 - 16837.711: 77.2761% ( 138) 00:07:33.321 16837.711 - 16938.535: 78.7525% ( 120) 00:07:33.321 16938.535 - 17039.360: 80.0812% ( 108) 00:07:33.321 17039.360 - 17140.185: 81.1147% ( 84) 
00:07:33.321 17140.185 - 17241.009: 82.2219% ( 90) 00:07:33.321 17241.009 - 17341.834: 83.4646% ( 101) 00:07:33.321 17341.834 - 17442.658: 84.3996% ( 76) 00:07:33.321 17442.658 - 17543.483: 85.2485% ( 69) 00:07:33.321 17543.483 - 17644.308: 86.0851% ( 68) 00:07:33.321 17644.308 - 17745.132: 87.0079% ( 75) 00:07:33.321 17745.132 - 17845.957: 87.8937% ( 72) 00:07:33.321 17845.957 - 17946.782: 88.7303% ( 68) 00:07:33.321 17946.782 - 18047.606: 89.4439% ( 58) 00:07:33.321 18047.606 - 18148.431: 90.2682% ( 67) 00:07:33.321 18148.431 - 18249.255: 90.9818% ( 58) 00:07:33.321 18249.255 - 18350.080: 91.5846% ( 49) 00:07:33.321 18350.080 - 18450.905: 91.9906% ( 33) 00:07:33.321 18450.905 - 18551.729: 92.5197% ( 43) 00:07:33.321 18551.729 - 18652.554: 93.0241% ( 41) 00:07:33.321 18652.554 - 18753.378: 93.4301% ( 33) 00:07:33.321 18753.378 - 18854.203: 93.8484% ( 34) 00:07:33.321 18854.203 - 18955.028: 94.2544% ( 33) 00:07:33.321 18955.028 - 19055.852: 94.7343% ( 39) 00:07:33.321 19055.852 - 19156.677: 95.1403% ( 33) 00:07:33.321 19156.677 - 19257.502: 95.3986% ( 21) 00:07:33.321 19257.502 - 19358.326: 95.8538% ( 37) 00:07:33.321 19358.326 - 19459.151: 96.2598% ( 33) 00:07:33.321 19459.151 - 19559.975: 96.6412% ( 31) 00:07:33.321 19559.975 - 19660.800: 96.9980% ( 29) 00:07:33.322 19660.800 - 19761.625: 97.3056% ( 25) 00:07:33.322 19761.625 - 19862.449: 97.4902% ( 15) 00:07:33.322 19862.449 - 19963.274: 97.6993% ( 17) 00:07:33.322 19963.274 - 20064.098: 97.8716% ( 14) 00:07:33.322 20064.098 - 20164.923: 98.0438% ( 14) 00:07:33.322 20164.923 - 20265.748: 98.1668% ( 10) 00:07:33.322 20265.748 - 20366.572: 98.2530% ( 7) 00:07:33.322 20366.572 - 20467.397: 98.3391% ( 7) 00:07:33.322 20467.397 - 20568.222: 98.3883% ( 4) 00:07:33.322 20568.222 - 20669.046: 98.4252% ( 3) 00:07:33.322 27222.646 - 27424.295: 98.4867% ( 5) 00:07:33.322 27424.295 - 27625.945: 98.5482% ( 5) 00:07:33.322 27625.945 - 27827.594: 98.6220% ( 6) 00:07:33.322 27827.594 - 28029.243: 98.7082% ( 7) 00:07:33.322 28029.243 - 28230.892: 98.7820% ( 6) 00:07:33.322 28230.892 - 28432.542: 98.8681% ( 7) 00:07:33.322 28432.542 - 28634.191: 98.9419% ( 6) 00:07:33.322 28634.191 - 28835.840: 99.0157% ( 6) 00:07:33.322 28835.840 - 29037.489: 99.1019% ( 7) 00:07:33.322 29037.489 - 29239.138: 99.1757% ( 6) 00:07:33.322 29239.138 - 29440.788: 99.2126% ( 3) 00:07:33.322 37506.757 - 37708.406: 99.2372% ( 2) 00:07:33.322 37708.406 - 37910.055: 99.2987% ( 5) 00:07:33.322 37910.055 - 38111.705: 99.3725% ( 6) 00:07:33.322 38111.705 - 38313.354: 99.4218% ( 4) 00:07:33.322 38313.354 - 38515.003: 99.4956% ( 6) 00:07:33.322 38515.003 - 38716.652: 99.5571% ( 5) 00:07:33.322 38716.652 - 38918.302: 99.6309% ( 6) 00:07:33.322 38918.302 - 39119.951: 99.6924% ( 5) 00:07:33.322 39119.951 - 39321.600: 99.7539% ( 5) 00:07:33.322 39321.600 - 39523.249: 99.8031% ( 4) 00:07:33.322 39523.249 - 39724.898: 99.8647% ( 5) 00:07:33.322 39724.898 - 39926.548: 99.9385% ( 6) 00:07:33.322 39926.548 - 40128.197: 99.9877% ( 4) 00:07:33.322 40128.197 - 40329.846: 100.0000% ( 1) 00:07:33.322 00:07:33.322 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:33.322 ============================================================================== 00:07:33.322 Range in us Cumulative IO count 00:07:33.322 10082.462 - 10132.874: 0.0369% ( 3) 00:07:33.322 10132.874 - 10183.286: 0.0615% ( 2) 00:07:33.322 10183.286 - 10233.698: 0.0984% ( 3) 00:07:33.322 10233.698 - 10284.111: 0.1353% ( 3) 00:07:33.322 10284.111 - 10334.523: 0.1845% ( 4) 00:07:33.322 10334.523 - 10384.935: 0.2215% ( 3) 
00:07:33.322 10384.935 - 10435.348: 0.2584% ( 3) 00:07:33.322 10435.348 - 10485.760: 0.2953% ( 3) 00:07:33.322 10485.760 - 10536.172: 0.3322% ( 3) 00:07:33.322 10536.172 - 10586.585: 0.3691% ( 3) 00:07:33.322 10586.585 - 10636.997: 0.4060% ( 3) 00:07:33.322 10636.997 - 10687.409: 0.4552% ( 4) 00:07:33.322 10687.409 - 10737.822: 0.4921% ( 3) 00:07:33.322 10737.822 - 10788.234: 0.5782% ( 7) 00:07:33.322 10788.234 - 10838.646: 0.7259% ( 12) 00:07:33.322 10838.646 - 10889.058: 0.8489% ( 10) 00:07:33.322 10889.058 - 10939.471: 0.9719% ( 10) 00:07:33.322 10939.471 - 10989.883: 1.0704% ( 8) 00:07:33.322 10989.883 - 11040.295: 1.1688% ( 8) 00:07:33.322 11040.295 - 11090.708: 1.2918% ( 10) 00:07:33.322 11090.708 - 11141.120: 1.4149% ( 10) 00:07:33.322 11141.120 - 11191.532: 1.5748% ( 13) 00:07:33.322 11191.532 - 11241.945: 1.7347% ( 13) 00:07:33.322 11241.945 - 11292.357: 1.8824% ( 12) 00:07:33.322 11292.357 - 11342.769: 2.0177% ( 11) 00:07:33.322 11342.769 - 11393.182: 2.1900% ( 14) 00:07:33.322 11393.182 - 11443.594: 2.3499% ( 13) 00:07:33.322 11443.594 - 11494.006: 2.5221% ( 14) 00:07:33.322 11494.006 - 11544.418: 2.6821% ( 13) 00:07:33.322 11544.418 - 11594.831: 2.8420% ( 13) 00:07:33.322 11594.831 - 11645.243: 3.0143% ( 14) 00:07:33.322 11645.243 - 11695.655: 3.1865% ( 14) 00:07:33.322 11695.655 - 11746.068: 3.3095% ( 10) 00:07:33.322 11746.068 - 11796.480: 3.4572% ( 12) 00:07:33.322 11796.480 - 11846.892: 3.6663% ( 17) 00:07:33.322 11846.892 - 11897.305: 3.9124% ( 20) 00:07:33.322 11897.305 - 11947.717: 4.1339% ( 18) 00:07:33.322 11947.717 - 11998.129: 4.2938% ( 13) 00:07:33.322 11998.129 - 12048.542: 4.4537% ( 13) 00:07:33.322 12048.542 - 12098.954: 4.6260% ( 14) 00:07:33.322 12098.954 - 12149.366: 4.8597% ( 19) 00:07:33.322 12149.366 - 12199.778: 5.1427% ( 23) 00:07:33.322 12199.778 - 12250.191: 5.4134% ( 22) 00:07:33.322 12250.191 - 12300.603: 5.6964% ( 23) 00:07:33.322 12300.603 - 12351.015: 5.9547% ( 21) 00:07:33.322 12351.015 - 12401.428: 6.2500% ( 24) 00:07:33.322 12401.428 - 12451.840: 6.5207% ( 22) 00:07:33.322 12451.840 - 12502.252: 6.8529% ( 27) 00:07:33.322 12502.252 - 12552.665: 7.1481% ( 24) 00:07:33.322 12552.665 - 12603.077: 7.5910% ( 36) 00:07:33.322 12603.077 - 12653.489: 7.8494% ( 21) 00:07:33.322 12653.489 - 12703.902: 8.1447% ( 24) 00:07:33.322 12703.902 - 12754.314: 8.4400% ( 24) 00:07:33.322 12754.314 - 12804.726: 8.7844% ( 28) 00:07:33.322 12804.726 - 12855.138: 9.1412% ( 29) 00:07:33.322 12855.138 - 12905.551: 9.5719% ( 35) 00:07:33.322 12905.551 - 13006.375: 10.5315% ( 78) 00:07:33.322 13006.375 - 13107.200: 11.7003% ( 95) 00:07:33.322 13107.200 - 13208.025: 12.7338% ( 84) 00:07:33.322 13208.025 - 13308.849: 13.7672% ( 84) 00:07:33.322 13308.849 - 13409.674: 14.7884% ( 83) 00:07:33.322 13409.674 - 13510.498: 15.7849% ( 81) 00:07:33.322 13510.498 - 13611.323: 16.9414% ( 94) 00:07:33.322 13611.323 - 13712.148: 18.2087% ( 103) 00:07:33.322 13712.148 - 13812.972: 19.5497% ( 109) 00:07:33.322 13812.972 - 13913.797: 20.6324% ( 88) 00:07:33.322 13913.797 - 14014.622: 21.6412% ( 82) 00:07:33.322 14014.622 - 14115.446: 22.6870% ( 85) 00:07:33.322 14115.446 - 14216.271: 23.7943% ( 90) 00:07:33.322 14216.271 - 14317.095: 24.9139% ( 91) 00:07:33.322 14317.095 - 14417.920: 26.2795% ( 111) 00:07:33.322 14417.920 - 14518.745: 27.7313% ( 118) 00:07:33.322 14518.745 - 14619.569: 29.3799% ( 134) 00:07:33.322 14619.569 - 14720.394: 31.3361% ( 159) 00:07:33.322 14720.394 - 14821.218: 33.4154% ( 169) 00:07:33.322 14821.218 - 14922.043: 35.2977% ( 153) 00:07:33.322 14922.043 - 15022.868: 
37.1063% ( 147) 00:07:33.322 15022.868 - 15123.692: 39.4439% ( 190) 00:07:33.322 15123.692 - 15224.517: 41.8922% ( 199) 00:07:33.322 15224.517 - 15325.342: 44.3898% ( 203) 00:07:33.322 15325.342 - 15426.166: 46.6781% ( 186) 00:07:33.322 15426.166 - 15526.991: 48.9542% ( 185) 00:07:33.322 15526.991 - 15627.815: 51.5625% ( 212) 00:07:33.322 15627.815 - 15728.640: 53.9985% ( 198) 00:07:33.322 15728.640 - 15829.465: 56.2623% ( 184) 00:07:33.322 15829.465 - 15930.289: 58.4769% ( 180) 00:07:33.322 15930.289 - 16031.114: 60.6791% ( 179) 00:07:33.322 16031.114 - 16131.938: 62.6969% ( 164) 00:07:33.322 16131.938 - 16232.763: 64.7638% ( 168) 00:07:33.322 16232.763 - 16333.588: 66.7323% ( 160) 00:07:33.322 16333.588 - 16434.412: 68.7377% ( 163) 00:07:33.322 16434.412 - 16535.237: 70.8415% ( 171) 00:07:33.322 16535.237 - 16636.062: 72.7608% ( 156) 00:07:33.322 16636.062 - 16736.886: 74.5325% ( 144) 00:07:33.322 16736.886 - 16837.711: 76.0335% ( 122) 00:07:33.322 16837.711 - 16938.535: 77.5960% ( 127) 00:07:33.322 16938.535 - 17039.360: 79.1093% ( 123) 00:07:33.322 17039.360 - 17140.185: 80.6718% ( 127) 00:07:33.322 17140.185 - 17241.009: 82.0989% ( 116) 00:07:33.322 17241.009 - 17341.834: 83.3907% ( 105) 00:07:33.322 17341.834 - 17442.658: 84.4980% ( 90) 00:07:33.322 17442.658 - 17543.483: 85.6176% ( 91) 00:07:33.322 17543.483 - 17644.308: 86.5650% ( 77) 00:07:33.322 17644.308 - 17745.132: 87.2908% ( 59) 00:07:33.322 17745.132 - 17845.957: 87.9429% ( 53) 00:07:33.322 17845.957 - 17946.782: 88.4966% ( 45) 00:07:33.322 17946.782 - 18047.606: 89.0010% ( 41) 00:07:33.322 18047.606 - 18148.431: 89.5054% ( 41) 00:07:33.322 18148.431 - 18249.255: 89.9729% ( 38) 00:07:33.322 18249.255 - 18350.080: 90.4158% ( 36) 00:07:33.322 18350.080 - 18450.905: 90.9449% ( 43) 00:07:33.322 18450.905 - 18551.729: 91.4124% ( 38) 00:07:33.322 18551.729 - 18652.554: 91.7569% ( 28) 00:07:33.322 18652.554 - 18753.378: 92.1875% ( 35) 00:07:33.322 18753.378 - 18854.203: 92.7657% ( 47) 00:07:33.322 18854.203 - 18955.028: 93.2456% ( 39) 00:07:33.322 18955.028 - 19055.852: 93.6639% ( 34) 00:07:33.322 19055.852 - 19156.677: 94.1437% ( 39) 00:07:33.322 19156.677 - 19257.502: 94.5128% ( 30) 00:07:33.322 19257.502 - 19358.326: 95.0049% ( 40) 00:07:33.322 19358.326 - 19459.151: 95.4601% ( 37) 00:07:33.322 19459.151 - 19559.975: 95.7923% ( 27) 00:07:33.322 19559.975 - 19660.800: 96.1491% ( 29) 00:07:33.322 19660.800 - 19761.625: 96.4936% ( 28) 00:07:33.322 19761.625 - 19862.449: 96.8381% ( 28) 00:07:33.322 19862.449 - 19963.274: 97.1334% ( 24) 00:07:33.322 19963.274 - 20064.098: 97.4163% ( 23) 00:07:33.322 20064.098 - 20164.923: 97.6255% ( 17) 00:07:33.322 20164.923 - 20265.748: 97.7731% ( 12) 00:07:33.322 20265.748 - 20366.572: 97.8962% ( 10) 00:07:33.322 20366.572 - 20467.397: 98.0192% ( 10) 00:07:33.322 20467.397 - 20568.222: 98.0684% ( 4) 00:07:33.322 20568.222 - 20669.046: 98.1176% ( 4) 00:07:33.322 20669.046 - 20769.871: 98.1668% ( 4) 00:07:33.322 20769.871 - 20870.695: 98.2283% ( 5) 00:07:33.322 20870.695 - 20971.520: 98.2776% ( 4) 00:07:33.322 20971.520 - 21072.345: 98.3268% ( 4) 00:07:33.322 21072.345 - 21173.169: 98.3883% ( 5) 00:07:33.322 21173.169 - 21273.994: 98.4252% ( 3) 00:07:33.322 27020.997 - 27222.646: 98.4498% ( 2) 00:07:33.322 27222.646 - 27424.295: 98.5236% ( 6) 00:07:33.322 27424.295 - 27625.945: 98.5974% ( 6) 00:07:33.322 27625.945 - 27827.594: 98.6713% ( 6) 00:07:33.322 27827.594 - 28029.243: 98.7451% ( 6) 00:07:33.322 28029.243 - 28230.892: 98.8189% ( 6) 00:07:33.322 28230.892 - 28432.542: 98.8927% ( 6) 
00:07:33.322 28432.542 - 28634.191: 98.9665% ( 6) 00:07:33.322 28634.191 - 28835.840: 99.0404% ( 6) 00:07:33.322 28835.840 - 29037.489: 99.1142% ( 6) 00:07:33.322 29037.489 - 29239.138: 99.2003% ( 7) 00:07:33.322 29239.138 - 29440.788: 99.2126% ( 1) 00:07:33.322 38111.705 - 38313.354: 99.2618% ( 4) 00:07:33.322 38313.354 - 38515.003: 99.3356% ( 6) 00:07:33.322 38515.003 - 38716.652: 99.4094% ( 6) 00:07:33.322 38716.652 - 38918.302: 99.4833% ( 6) 00:07:33.322 38918.302 - 39119.951: 99.5571% ( 6) 00:07:33.322 39119.951 - 39321.600: 99.6309% ( 6) 00:07:33.322 39321.600 - 39523.249: 99.6924% ( 5) 00:07:33.322 39523.249 - 39724.898: 99.7539% ( 5) 00:07:33.322 39724.898 - 39926.548: 99.8155% ( 5) 00:07:33.322 39926.548 - 40128.197: 99.8647% ( 4) 00:07:33.322 40128.197 - 40329.846: 99.9385% ( 6) 00:07:33.322 40329.846 - 40531.495: 100.0000% ( 5) 00:07:33.322 00:07:33.322 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:33.322 ============================================================================== 00:07:33.322 Range in us Cumulative IO count 00:07:33.322 10485.760 - 10536.172: 0.0123% ( 1) 00:07:33.322 10536.172 - 10586.585: 0.0246% ( 1) 00:07:33.322 10586.585 - 10636.997: 0.0615% ( 3) 00:07:33.322 10636.997 - 10687.409: 0.0738% ( 1) 00:07:33.322 10687.409 - 10737.822: 0.1230% ( 4) 00:07:33.322 10737.822 - 10788.234: 0.1845% ( 5) 00:07:33.322 10788.234 - 10838.646: 0.2830% ( 8) 00:07:33.322 10838.646 - 10889.058: 0.3691% ( 7) 00:07:33.322 10889.058 - 10939.471: 0.4798% ( 9) 00:07:33.322 10939.471 - 10989.883: 0.6275% ( 12) 00:07:33.322 10989.883 - 11040.295: 0.8120% ( 15) 00:07:33.322 11040.295 - 11090.708: 0.9596% ( 12) 00:07:33.322 11090.708 - 11141.120: 1.1319% ( 14) 00:07:33.322 11141.120 - 11191.532: 1.2795% ( 12) 00:07:33.322 11191.532 - 11241.945: 1.4887% ( 17) 00:07:33.322 11241.945 - 11292.357: 1.6978% ( 17) 00:07:33.322 11292.357 - 11342.769: 1.9193% ( 18) 00:07:33.322 11342.769 - 11393.182: 2.0915% ( 14) 00:07:33.322 11393.182 - 11443.594: 2.2761% ( 15) 00:07:33.322 11443.594 - 11494.006: 2.4975% ( 18) 00:07:33.322 11494.006 - 11544.418: 2.7067% ( 17) 00:07:33.322 11544.418 - 11594.831: 2.9281% ( 18) 00:07:33.322 11594.831 - 11645.243: 3.1619% ( 19) 00:07:33.322 11645.243 - 11695.655: 3.3588% ( 16) 00:07:33.322 11695.655 - 11746.068: 3.5679% ( 17) 00:07:33.322 11746.068 - 11796.480: 3.8386% ( 22) 00:07:33.322 11796.480 - 11846.892: 4.0600% ( 18) 00:07:33.322 11846.892 - 11897.305: 4.3184% ( 21) 00:07:33.322 11897.305 - 11947.717: 4.5153% ( 16) 00:07:33.322 11947.717 - 11998.129: 4.6383% ( 10) 00:07:33.322 11998.129 - 12048.542: 4.7736% ( 11) 00:07:33.322 12048.542 - 12098.954: 4.9090% ( 11) 00:07:33.322 12098.954 - 12149.366: 5.0689% ( 13) 00:07:33.322 12149.366 - 12199.778: 5.2781% ( 17) 00:07:33.322 12199.778 - 12250.191: 5.4380% ( 13) 00:07:33.322 12250.191 - 12300.603: 5.6225% ( 15) 00:07:33.322 12300.603 - 12351.015: 5.8317% ( 17) 00:07:33.322 12351.015 - 12401.428: 6.0901% ( 21) 00:07:33.322 12401.428 - 12451.840: 6.3853% ( 24) 00:07:33.322 12451.840 - 12502.252: 6.5822% ( 16) 00:07:33.322 12502.252 - 12552.665: 6.8406% ( 21) 00:07:33.322 12552.665 - 12603.077: 7.1481% ( 25) 00:07:33.322 12603.077 - 12653.489: 7.4680% ( 26) 00:07:33.322 12653.489 - 12703.902: 7.8002% ( 27) 00:07:33.322 12703.902 - 12754.314: 8.0955% ( 24) 00:07:33.322 12754.314 - 12804.726: 8.4154% ( 26) 00:07:33.322 12804.726 - 12855.138: 8.7352% ( 26) 00:07:33.322 12855.138 - 12905.551: 9.0305% ( 24) 00:07:33.323 12905.551 - 13006.375: 9.7195% ( 56) 00:07:33.323 13006.375 - 13107.200: 
10.3716% ( 53) 00:07:33.323 13107.200 - 13208.025: 11.1590% ( 64) 00:07:33.323 13208.025 - 13308.849: 12.0079% ( 69) 00:07:33.323 13308.849 - 13409.674: 12.9429% ( 76) 00:07:33.323 13409.674 - 13510.498: 14.1363% ( 97) 00:07:33.323 13510.498 - 13611.323: 15.6619% ( 124) 00:07:33.323 13611.323 - 13712.148: 17.0399% ( 112) 00:07:33.323 13712.148 - 13812.972: 18.3071% ( 103) 00:07:33.323 13812.972 - 13913.797: 19.6973% ( 113) 00:07:33.323 13913.797 - 14014.622: 21.2229% ( 124) 00:07:33.323 14014.622 - 14115.446: 22.8100% ( 129) 00:07:33.323 14115.446 - 14216.271: 24.3848% ( 128) 00:07:33.323 14216.271 - 14317.095: 25.9966% ( 131) 00:07:33.323 14317.095 - 14417.920: 27.4237% ( 116) 00:07:33.323 14417.920 - 14518.745: 29.1708% ( 142) 00:07:33.323 14518.745 - 14619.569: 30.9670% ( 146) 00:07:33.323 14619.569 - 14720.394: 32.7879% ( 148) 00:07:33.323 14720.394 - 14821.218: 34.6334% ( 150) 00:07:33.323 14821.218 - 14922.043: 36.4542% ( 148) 00:07:33.323 14922.043 - 15022.868: 38.3120% ( 151) 00:07:33.323 15022.868 - 15123.692: 40.3912% ( 169) 00:07:33.323 15123.692 - 15224.517: 42.1875% ( 146) 00:07:33.323 15224.517 - 15325.342: 43.9099% ( 140) 00:07:33.323 15325.342 - 15426.166: 45.8169% ( 155) 00:07:33.323 15426.166 - 15526.991: 47.9946% ( 177) 00:07:33.323 15526.991 - 15627.815: 50.1107% ( 172) 00:07:33.323 15627.815 - 15728.640: 52.2269% ( 172) 00:07:33.323 15728.640 - 15829.465: 54.4783% ( 183) 00:07:33.323 15829.465 - 15930.289: 56.8775% ( 195) 00:07:33.323 15930.289 - 16031.114: 59.1658% ( 186) 00:07:33.323 16031.114 - 16131.938: 61.5527% ( 194) 00:07:33.323 16131.938 - 16232.763: 64.1363% ( 210) 00:07:33.323 16232.763 - 16333.588: 66.6216% ( 202) 00:07:33.323 16333.588 - 16434.412: 68.9099% ( 186) 00:07:33.323 16434.412 - 16535.237: 71.0138% ( 171) 00:07:33.323 16535.237 - 16636.062: 73.2037% ( 178) 00:07:33.323 16636.062 - 16736.886: 75.0369% ( 149) 00:07:33.323 16736.886 - 16837.711: 76.8578% ( 148) 00:07:33.323 16837.711 - 16938.535: 78.4572% ( 130) 00:07:33.323 16938.535 - 17039.360: 80.0566% ( 130) 00:07:33.323 17039.360 - 17140.185: 81.3730% ( 107) 00:07:33.323 17140.185 - 17241.009: 82.6156% ( 101) 00:07:33.323 17241.009 - 17341.834: 83.5384% ( 75) 00:07:33.323 17341.834 - 17442.658: 84.3873% ( 69) 00:07:33.323 17442.658 - 17543.483: 85.2854% ( 73) 00:07:33.323 17543.483 - 17644.308: 86.1220% ( 68) 00:07:33.323 17644.308 - 17745.132: 86.7741% ( 53) 00:07:33.323 17745.132 - 17845.957: 87.4139% ( 52) 00:07:33.323 17845.957 - 17946.782: 88.2505% ( 68) 00:07:33.323 17946.782 - 18047.606: 89.1855% ( 76) 00:07:33.323 18047.606 - 18148.431: 89.8991% ( 58) 00:07:33.323 18148.431 - 18249.255: 90.5758% ( 55) 00:07:33.323 18249.255 - 18350.080: 91.3755% ( 65) 00:07:33.323 18350.080 - 18450.905: 92.0645% ( 56) 00:07:33.323 18450.905 - 18551.729: 92.7657% ( 57) 00:07:33.323 18551.729 - 18652.554: 93.4424% ( 55) 00:07:33.323 18652.554 - 18753.378: 94.2175% ( 63) 00:07:33.323 18753.378 - 18854.203: 94.8081% ( 48) 00:07:33.323 18854.203 - 18955.028: 95.2141% ( 33) 00:07:33.323 18955.028 - 19055.852: 95.5463% ( 27) 00:07:33.323 19055.852 - 19156.677: 95.8415% ( 24) 00:07:33.323 19156.677 - 19257.502: 96.1614% ( 26) 00:07:33.323 19257.502 - 19358.326: 96.4444% ( 23) 00:07:33.323 19358.326 - 19459.151: 96.6289% ( 15) 00:07:33.323 19459.151 - 19559.975: 96.7274% ( 8) 00:07:33.323 19559.975 - 19660.800: 96.8012% ( 6) 00:07:33.323 19660.800 - 19761.625: 96.9611% ( 13) 00:07:33.323 19761.625 - 19862.449: 97.3179% ( 29) 00:07:33.323 19862.449 - 19963.274: 97.4532% ( 11) 00:07:33.323 19963.274 - 20064.098: 
00:07:33.323 [histogram buckets for the preceding namespace continue: 20064.098us - 39926.548us, cumulative 97.5517% -> 100.0000%]
00:07:33.323 
00:07:33.323 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:33.323 ==============================================================================
00:07:33.323        Range in us     Cumulative    IO count
00:07:33.323 [histogram buckets: 9779.988us - 39523.249us, cumulative 0.0369% -> 100.0000%]
00:07:33.324 
00:07:33.324 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:33.324 ==============================================================================
00:07:33.324        Range in us     Cumulative    IO count
00:07:33.324 [histogram buckets: 9124.628us - 28029.243us, cumulative 0.0366% -> 100.0000%]
00:07:33.324 
00:07:33.324 17:24:06 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:34.710 Initializing NVMe Controllers
00:07:34.710 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:34.710 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:34.710 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:34.710 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:34.710 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:34.710 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:34.710 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:34.710 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:34.710 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:34.710 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:34.710 Initialization complete. Launching workers.
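
For reference, the run above boils down to a single spdk_nvme_perf invocation. Below is a minimal bash sketch of an equivalent standalone run; the flag glosses paraphrase the tool's usage text as I read it, and the binary path is simply the one this job's VM uses, so both are assumptions to adjust.

#!/usr/bin/env bash
# A sketch, not the job's script: reproduce the write-latency run above.
# Assumes the SPDK build sits at the same path as on this test VM and that
# the NVMe devices are already bound for SPDK use (e.g. via scripts/setup.sh).
PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

# Flag glosses (paraphrased from spdk_nvme_perf usage):
#   -q 128     queue depth per namespace
#   -o 12288   I/O size in bytes (12 KiB)
#   -w write   I/O pattern
#   -t 1       run time in seconds
#   -LL        latency tracking; the doubled flag also prints detailed histograms
#   -i 0       shared-memory group ID
"$PERF" -q 128 -o 12288 -w write -t 1 -LL -i 0
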
00:07:34.710 ========================================================
00:07:34.710                                                                              Latency(us)
00:07:34.710 Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:34.710 PCIE (0000:00:10.0) NSID 1 from core  0:    9637.08     112.93   13301.72    9635.70   36115.59
00:07:34.710 PCIE (0000:00:11.0) NSID 1 from core  0:    9637.08     112.93   13281.95    9581.74   34429.20
00:07:34.710 PCIE (0000:00:13.0) NSID 1 from core  0:    9637.08     112.93   13262.01    9323.89   34120.38
00:07:34.710 PCIE (0000:00:12.0) NSID 1 from core  0:    9637.08     112.93   13242.23   10055.37   32968.24
00:07:34.710 PCIE (0000:00:12.0) NSID 2 from core  0:    9637.08     112.93   13222.28    9494.90   31963.63
00:07:34.710 PCIE (0000:00:12.0) NSID 3 from core  0:    9700.91     113.68   13115.42    8930.92   23332.12
00:07:34.710 ========================================================
00:07:34.710 Total                                  :   57886.32     678.36   13237.47    8930.92   36115.59
00:07:34.710 
00:07:34.710 Summary latency data (us, by percentile) from core 0:
00:07:34.710 =================================================================================
00:07:34.710 Device                         1.00000%  10.00000%  25.00000%  50.00000%  75.00000%  90.00000%  95.00000%  98.00000%  99.00000%  99.50000%  99.90000%  99.99000%+
00:07:34.710 PCIE (0000:00:10.0) NSID 1    10284.111  11040.295  11846.892  13006.375  14216.271  15627.815  16535.237  17442.658  27625.945  34683.668  35893.563  36296.862
00:07:34.710 PCIE (0000:00:11.0) NSID 1    10334.523  10989.883  11846.892  13006.375  14216.271  15627.815  16434.412  17543.483  26214.400  33473.772  34280.369  34482.018
00:07:34.710 PCIE (0000:00:13.0) NSID 1    10032.049  10889.058  11746.068  13006.375  14317.095  15526.991  16232.763  17442.658  25710.277  33070.474  34078.720  34280.369
00:07:34.711 PCIE (0000:00:12.0) NSID 1    10334.523  10939.471  11846.892  13006.375  14216.271  15627.815  16232.763  17241.009  23996.258  31860.578  32868.825  33070.474
00:07:34.711 PCIE (0000:00:12.0) NSID 2    10032.049  11090.708  11796.480  13006.375  14216.271  15526.991  16434.412  17140.185  22887.188  30852.332  31860.578  32062.228
00:07:34.711 PCIE (0000:00:12.0) NSID 3     9830.400  11090.708  11746.068  13107.200  14115.446  15526.991  16333.588  17039.360  17341.834  22181.415  23189.662  23391.311
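
A note on consuming the tables above: the per-device summary rows share a fixed shape, so they can be scraped out of a saved console log with a few lines of awk. This is only a sketch; build.log is a hypothetical capture of this console output, not a file the job produces.

#!/usr/bin/env bash
# A sketch: scrape the per-device summary rows out of a saved console log.
# The filter keys on rows that end in a plain number, which skips the
# "Latency histogram ..." and "Summary latency data ..." header lines.
awk '/PCIE \(/ && /from core/ && $NF ~ /^[0-9.]+$/ {
    # last five fields of a table row: IOPS  MiB/s  Average  min  max
    printf "%s %s %s %s: iops=%s avg_us=%s max_us=%s\n",
           $2, $3, $4, $5, $(NF-4), $(NF-2), $NF
}' build.log
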
00:07:34.711 99.00000% : 23996.258us 00:07:34.711 99.50000% : 31860.578us 00:07:34.711 99.90000% : 32868.825us 00:07:34.711 99.99000% : 33070.474us 00:07:34.711 99.99900% : 33070.474us 00:07:34.711 99.99990% : 33070.474us 00:07:34.711 99.99999% : 33070.474us 00:07:34.711 00:07:34.711 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:34.711 ================================================================================= 00:07:34.711 1.00000% : 10032.049us 00:07:34.711 10.00000% : 11090.708us 00:07:34.711 25.00000% : 11796.480us 00:07:34.711 50.00000% : 13006.375us 00:07:34.711 75.00000% : 14216.271us 00:07:34.711 90.00000% : 15526.991us 00:07:34.711 95.00000% : 16434.412us 00:07:34.711 98.00000% : 17140.185us 00:07:34.711 99.00000% : 22887.188us 00:07:34.711 99.50000% : 30852.332us 00:07:34.711 99.90000% : 31860.578us 00:07:34.711 99.99000% : 32062.228us 00:07:34.711 99.99900% : 32062.228us 00:07:34.711 99.99990% : 32062.228us 00:07:34.711 99.99999% : 32062.228us 00:07:34.711 00:07:34.711 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:34.711 ================================================================================= 00:07:34.711 1.00000% : 9830.400us 00:07:34.711 10.00000% : 11090.708us 00:07:34.711 25.00000% : 11746.068us 00:07:34.711 50.00000% : 13107.200us 00:07:34.711 75.00000% : 14115.446us 00:07:34.711 90.00000% : 15526.991us 00:07:34.711 95.00000% : 16333.588us 00:07:34.711 98.00000% : 17039.360us 00:07:34.711 99.00000% : 17341.834us 00:07:34.711 99.50000% : 22181.415us 00:07:34.711 99.90000% : 23189.662us 00:07:34.711 99.99000% : 23391.311us 00:07:34.711 99.99900% : 23391.311us 00:07:34.711 99.99990% : 23391.311us 00:07:34.711 99.99999% : 23391.311us 00:07:34.711 00:07:34.711 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:34.711 ============================================================================== 00:07:34.711 Range in us Cumulative IO count 00:07:34.711 9628.751 - 9679.163: 0.0414% ( 4) 00:07:34.711 9679.163 - 9729.575: 0.0828% ( 4) 00:07:34.711 9729.575 - 9779.988: 0.0931% ( 1) 00:07:34.711 9779.988 - 9830.400: 0.1863% ( 9) 00:07:34.711 9830.400 - 9880.812: 0.2587% ( 7) 00:07:34.711 9880.812 - 9931.225: 0.3725% ( 11) 00:07:34.711 9931.225 - 9981.637: 0.5381% ( 16) 00:07:34.711 9981.637 - 10032.049: 0.6209% ( 8) 00:07:34.711 10032.049 - 10082.462: 0.7036% ( 8) 00:07:34.711 10082.462 - 10132.874: 0.7761% ( 7) 00:07:34.711 10132.874 - 10183.286: 0.8692% ( 9) 00:07:34.711 10183.286 - 10233.698: 0.9416% ( 7) 00:07:34.711 10233.698 - 10284.111: 1.1072% ( 16) 00:07:34.711 10284.111 - 10334.523: 1.1589% ( 5) 00:07:34.711 10334.523 - 10384.935: 1.3349% ( 17) 00:07:34.711 10384.935 - 10435.348: 1.7798% ( 43) 00:07:34.711 10435.348 - 10485.760: 2.2351% ( 44) 00:07:34.711 10485.760 - 10536.172: 2.8146% ( 56) 00:07:34.711 10536.172 - 10586.585: 3.4147% ( 58) 00:07:34.711 10586.585 - 10636.997: 4.2425% ( 80) 00:07:34.711 10636.997 - 10687.409: 5.0083% ( 74) 00:07:34.711 10687.409 - 10737.822: 5.8568% ( 82) 00:07:34.711 10737.822 - 10788.234: 6.6950% ( 81) 00:07:34.711 10788.234 - 10838.646: 7.6676% ( 94) 00:07:34.711 10838.646 - 10889.058: 8.5472% ( 85) 00:07:34.711 10889.058 - 10939.471: 9.0335% ( 47) 00:07:34.711 10939.471 - 10989.883: 9.7993% ( 74) 00:07:34.711 10989.883 - 11040.295: 10.4925% ( 67) 00:07:34.711 11040.295 - 11090.708: 11.2479% ( 73) 00:07:34.711 11090.708 - 11141.120: 12.1689% ( 89) 00:07:34.711 11141.120 - 11191.532: 13.2243% ( 102) 00:07:34.711 11191.532 - 11241.945: 14.4661% ( 120) 
00:07:34.711 11241.945 - 11292.357: 15.5112% ( 101) 00:07:34.711 11292.357 - 11342.769: 16.4632% ( 92) 00:07:34.711 11342.769 - 11393.182: 17.4565% ( 96) 00:07:34.711 11393.182 - 11443.594: 18.6362% ( 114) 00:07:34.711 11443.594 - 11494.006: 19.5468% ( 88) 00:07:34.711 11494.006 - 11544.418: 20.5815% ( 100) 00:07:34.711 11544.418 - 11594.831: 21.4094% ( 80) 00:07:34.711 11594.831 - 11645.243: 22.0613% ( 63) 00:07:34.711 11645.243 - 11695.655: 23.1374% ( 104) 00:07:34.711 11695.655 - 11746.068: 23.7790% ( 62) 00:07:34.711 11746.068 - 11796.480: 24.7206% ( 91) 00:07:34.711 11796.480 - 11846.892: 25.8071% ( 105) 00:07:34.711 11846.892 - 11897.305: 26.8729% ( 103) 00:07:34.711 11897.305 - 11947.717: 27.9180% ( 101) 00:07:34.711 11947.717 - 11998.129: 29.0149% ( 106) 00:07:34.711 11998.129 - 12048.542: 30.0704% ( 102) 00:07:34.711 12048.542 - 12098.954: 31.3638% ( 125) 00:07:34.711 12098.954 - 12149.366: 32.3365% ( 94) 00:07:34.711 12149.366 - 12199.778: 33.3506% ( 98) 00:07:34.711 12199.778 - 12250.191: 34.9338% ( 153) 00:07:34.711 12250.191 - 12300.603: 36.3514% ( 137) 00:07:34.711 12300.603 - 12351.015: 37.6035% ( 121) 00:07:34.711 12351.015 - 12401.428: 38.7210% ( 108) 00:07:34.711 12401.428 - 12451.840: 39.6627% ( 91) 00:07:34.711 12451.840 - 12502.252: 40.5422% ( 85) 00:07:34.711 12502.252 - 12552.665: 41.5459% ( 97) 00:07:34.711 12552.665 - 12603.077: 42.3738% ( 80) 00:07:34.711 12603.077 - 12653.489: 43.3775% ( 97) 00:07:34.711 12653.489 - 12703.902: 44.3088% ( 90) 00:07:34.711 12703.902 - 12754.314: 45.1469% ( 81) 00:07:34.711 12754.314 - 12804.726: 46.1403% ( 96) 00:07:34.711 12804.726 - 12855.138: 46.8853% ( 72) 00:07:34.711 12855.138 - 12905.551: 48.0753% ( 115) 00:07:34.711 12905.551 - 13006.375: 50.3829% ( 223) 00:07:34.711 13006.375 - 13107.200: 53.0112% ( 254) 00:07:34.711 13107.200 - 13208.025: 55.5257% ( 243) 00:07:34.711 13208.025 - 13308.849: 57.9263% ( 232) 00:07:34.711 13308.849 - 13409.674: 60.1614% ( 216) 00:07:34.711 13409.674 - 13510.498: 62.5724% ( 233) 00:07:34.711 13510.498 - 13611.323: 65.0352% ( 238) 00:07:34.711 13611.323 - 13712.148: 67.2289% ( 212) 00:07:34.711 13712.148 - 13812.972: 69.1846% ( 189) 00:07:34.711 13812.972 - 13913.797: 70.9644% ( 172) 00:07:34.711 13913.797 - 14014.622: 72.9098% ( 188) 00:07:34.712 14014.622 - 14115.446: 74.5861% ( 162) 00:07:34.712 14115.446 - 14216.271: 76.1279% ( 149) 00:07:34.712 14216.271 - 14317.095: 77.3903% ( 122) 00:07:34.712 14317.095 - 14417.920: 78.6734% ( 124) 00:07:34.712 14417.920 - 14518.745: 79.9565% ( 124) 00:07:34.712 14518.745 - 14619.569: 81.1672% ( 117) 00:07:34.712 14619.569 - 14720.394: 82.5021% ( 129) 00:07:34.712 14720.394 - 14821.218: 83.5058% ( 97) 00:07:34.712 14821.218 - 14922.043: 84.5199% ( 98) 00:07:34.712 14922.043 - 15022.868: 85.6478% ( 109) 00:07:34.712 15022.868 - 15123.692: 86.4756% ( 80) 00:07:34.712 15123.692 - 15224.517: 87.4276% ( 92) 00:07:34.712 15224.517 - 15325.342: 88.3796% ( 92) 00:07:34.712 15325.342 - 15426.166: 89.1246% ( 72) 00:07:34.712 15426.166 - 15526.991: 89.7972% ( 65) 00:07:34.712 15526.991 - 15627.815: 90.4491% ( 63) 00:07:34.712 15627.815 - 15728.640: 91.1217% ( 65) 00:07:34.712 15728.640 - 15829.465: 91.8253% ( 68) 00:07:34.712 15829.465 - 15930.289: 92.5186% ( 67) 00:07:34.712 15930.289 - 16031.114: 93.2016% ( 66) 00:07:34.712 16031.114 - 16131.938: 93.6465% ( 43) 00:07:34.712 16131.938 - 16232.763: 94.2467% ( 58) 00:07:34.712 16232.763 - 16333.588: 94.5261% ( 27) 00:07:34.712 16333.588 - 16434.412: 94.9193% ( 38) 00:07:34.712 16434.412 - 16535.237: 95.4367% ( 50) 
00:07:34.712 16535.237 - 16636.062: 95.8402% ( 39) 00:07:34.712 16636.062 - 16736.886: 96.3576% ( 50) 00:07:34.712 16736.886 - 16837.711: 96.7405% ( 37) 00:07:34.712 16837.711 - 16938.535: 97.0095% ( 26) 00:07:34.712 16938.535 - 17039.360: 97.2993% ( 28) 00:07:34.712 17039.360 - 17140.185: 97.5683% ( 26) 00:07:34.712 17140.185 - 17241.009: 97.7752% ( 20) 00:07:34.712 17241.009 - 17341.834: 97.9719% ( 19) 00:07:34.712 17341.834 - 17442.658: 98.0546% ( 8) 00:07:34.712 17442.658 - 17543.483: 98.1581% ( 10) 00:07:34.712 17543.483 - 17644.308: 98.2512% ( 9) 00:07:34.712 17644.308 - 17745.132: 98.3444% ( 9) 00:07:34.712 17745.132 - 17845.957: 98.4996% ( 15) 00:07:34.712 17845.957 - 17946.782: 98.5410% ( 4) 00:07:34.712 17946.782 - 18047.606: 98.5927% ( 5) 00:07:34.712 18047.606 - 18148.431: 98.6445% ( 5) 00:07:34.712 18148.431 - 18249.255: 98.6755% ( 3) 00:07:34.712 26819.348 - 27020.997: 98.7479% ( 7) 00:07:34.712 27020.997 - 27222.646: 98.9135% ( 16) 00:07:34.712 27222.646 - 27424.295: 98.9756% ( 6) 00:07:34.712 27424.295 - 27625.945: 99.0170% ( 4) 00:07:34.712 27625.945 - 27827.594: 99.0894% ( 7) 00:07:34.712 27827.594 - 28029.243: 99.1722% ( 8) 00:07:34.712 28029.243 - 28230.892: 99.2343% ( 6) 00:07:34.712 28230.892 - 28432.542: 99.3274% ( 9) 00:07:34.712 28432.542 - 28634.191: 99.3377% ( 1) 00:07:34.712 33473.772 - 33675.422: 99.3584% ( 2) 00:07:34.712 34078.720 - 34280.369: 99.3998% ( 4) 00:07:34.712 34280.369 - 34482.018: 99.4619% ( 6) 00:07:34.712 34482.018 - 34683.668: 99.5137% ( 5) 00:07:34.712 34683.668 - 34885.317: 99.5654% ( 5) 00:07:34.712 34885.317 - 35086.966: 99.6378% ( 7) 00:07:34.712 35086.966 - 35288.615: 99.7103% ( 7) 00:07:34.712 35288.615 - 35490.265: 99.7724% ( 6) 00:07:34.712 35490.265 - 35691.914: 99.8551% ( 8) 00:07:34.712 35691.914 - 35893.563: 99.9276% ( 7) 00:07:34.712 35893.563 - 36095.212: 99.9897% ( 6) 00:07:34.712 36095.212 - 36296.862: 100.0000% ( 1) 00:07:34.712 00:07:34.712 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:34.712 ============================================================================== 00:07:34.712 Range in us Cumulative IO count 00:07:34.712 9578.338 - 9628.751: 0.0621% ( 6) 00:07:34.712 9628.751 - 9679.163: 0.1035% ( 4) 00:07:34.712 9679.163 - 9729.575: 0.1759% ( 7) 00:07:34.712 9729.575 - 9779.988: 0.3311% ( 15) 00:07:34.712 9779.988 - 9830.400: 0.4760% ( 14) 00:07:34.712 9830.400 - 9880.812: 0.5277% ( 5) 00:07:34.712 9880.812 - 9931.225: 0.5795% ( 5) 00:07:34.712 9931.225 - 9981.637: 0.6105% ( 3) 00:07:34.712 9981.637 - 10032.049: 0.6416% ( 3) 00:07:34.712 10032.049 - 10082.462: 0.6623% ( 2) 00:07:34.712 10132.874 - 10183.286: 0.7243% ( 6) 00:07:34.712 10183.286 - 10233.698: 0.8071% ( 8) 00:07:34.712 10233.698 - 10284.111: 0.8796% ( 7) 00:07:34.712 10284.111 - 10334.523: 1.0141% ( 13) 00:07:34.712 10334.523 - 10384.935: 1.3866% ( 36) 00:07:34.712 10384.935 - 10435.348: 1.6453% ( 25) 00:07:34.712 10435.348 - 10485.760: 2.0075% ( 35) 00:07:34.712 10485.760 - 10536.172: 2.3282% ( 31) 00:07:34.712 10536.172 - 10586.585: 2.9387% ( 59) 00:07:34.712 10586.585 - 10636.997: 3.5493% ( 59) 00:07:34.712 10636.997 - 10687.409: 4.2943% ( 72) 00:07:34.712 10687.409 - 10737.822: 5.2980% ( 97) 00:07:34.712 10737.822 - 10788.234: 6.0948% ( 77) 00:07:34.712 10788.234 - 10838.646: 6.8812% ( 76) 00:07:34.712 10838.646 - 10889.058: 8.1643% ( 124) 00:07:34.712 10889.058 - 10939.471: 9.3957% ( 119) 00:07:34.712 10939.471 - 10989.883: 10.6788% ( 124) 00:07:34.712 10989.883 - 11040.295: 11.7032% ( 99) 00:07:34.712 11040.295 - 11090.708: 
12.8208% ( 108) 00:07:34.712 11090.708 - 11141.120: 13.8349% ( 98) 00:07:34.712 11141.120 - 11191.532: 15.0352% ( 116) 00:07:34.712 11191.532 - 11241.945: 15.9768% ( 91) 00:07:34.712 11241.945 - 11292.357: 16.7632% ( 76) 00:07:34.712 11292.357 - 11342.769: 17.4876% ( 70) 00:07:34.712 11342.769 - 11393.182: 18.1084% ( 60) 00:07:34.712 11393.182 - 11443.594: 18.8224% ( 69) 00:07:34.712 11443.594 - 11494.006: 19.5157% ( 67) 00:07:34.712 11494.006 - 11544.418: 20.5091% ( 96) 00:07:34.712 11544.418 - 11594.831: 21.4300% ( 89) 00:07:34.712 11594.831 - 11645.243: 22.1751% ( 72) 00:07:34.712 11645.243 - 11695.655: 22.9719% ( 77) 00:07:34.712 11695.655 - 11746.068: 23.8411% ( 84) 00:07:34.712 11746.068 - 11796.480: 24.6896% ( 82) 00:07:34.712 11796.480 - 11846.892: 25.6209% ( 90) 00:07:34.712 11846.892 - 11897.305: 26.7695% ( 111) 00:07:34.712 11897.305 - 11947.717: 28.3009% ( 148) 00:07:34.712 11947.717 - 11998.129: 29.8013% ( 145) 00:07:34.712 11998.129 - 12048.542: 31.1983% ( 135) 00:07:34.712 12048.542 - 12098.954: 32.6055% ( 136) 00:07:34.712 12098.954 - 12149.366: 34.1784% ( 152) 00:07:34.712 12149.366 - 12199.778: 35.4822% ( 126) 00:07:34.712 12199.778 - 12250.191: 36.5170% ( 100) 00:07:34.712 12250.191 - 12300.603: 37.4897% ( 94) 00:07:34.712 12300.603 - 12351.015: 38.7210% ( 119) 00:07:34.712 12351.015 - 12401.428: 39.9110% ( 115) 00:07:34.712 12401.428 - 12451.840: 40.9354% ( 99) 00:07:34.712 12451.840 - 12502.252: 41.9288% ( 96) 00:07:34.713 12502.252 - 12552.665: 43.1188% ( 115) 00:07:34.713 12552.665 - 12603.077: 44.2674% ( 111) 00:07:34.713 12603.077 - 12653.489: 45.1573% ( 86) 00:07:34.713 12653.489 - 12703.902: 46.1093% ( 92) 00:07:34.713 12703.902 - 12754.314: 47.1958% ( 105) 00:07:34.713 12754.314 - 12804.726: 48.2719% ( 104) 00:07:34.713 12804.726 - 12855.138: 49.2239% ( 92) 00:07:34.713 12855.138 - 12905.551: 49.9172% ( 67) 00:07:34.713 12905.551 - 13006.375: 51.5004% ( 153) 00:07:34.713 13006.375 - 13107.200: 52.9180% ( 137) 00:07:34.713 13107.200 - 13208.025: 54.4392% ( 147) 00:07:34.713 13208.025 - 13308.849: 56.6018% ( 209) 00:07:34.713 13308.849 - 13409.674: 59.2508% ( 256) 00:07:34.713 13409.674 - 13510.498: 61.5894% ( 226) 00:07:34.713 13510.498 - 13611.323: 63.9797% ( 231) 00:07:34.713 13611.323 - 13712.148: 65.9354% ( 189) 00:07:34.713 13712.148 - 13812.972: 67.8601% ( 186) 00:07:34.713 13812.972 - 13913.797: 69.7848% ( 186) 00:07:34.713 13913.797 - 14014.622: 72.0820% ( 222) 00:07:34.713 14014.622 - 14115.446: 73.9652% ( 182) 00:07:34.713 14115.446 - 14216.271: 75.3725% ( 136) 00:07:34.713 14216.271 - 14317.095: 76.8522% ( 143) 00:07:34.713 14317.095 - 14417.920: 78.2285% ( 133) 00:07:34.713 14417.920 - 14518.745: 79.6565% ( 138) 00:07:34.713 14518.745 - 14619.569: 80.9396% ( 124) 00:07:34.713 14619.569 - 14720.394: 82.0985% ( 112) 00:07:34.713 14720.394 - 14821.218: 83.6817% ( 153) 00:07:34.713 14821.218 - 14922.043: 84.6958% ( 98) 00:07:34.713 14922.043 - 15022.868: 85.5132% ( 79) 00:07:34.713 15022.868 - 15123.692: 86.2169% ( 68) 00:07:34.713 15123.692 - 15224.517: 86.8895% ( 65) 00:07:34.713 15224.517 - 15325.342: 87.6345% ( 72) 00:07:34.713 15325.342 - 15426.166: 88.4106% ( 75) 00:07:34.713 15426.166 - 15526.991: 89.2798% ( 84) 00:07:34.713 15526.991 - 15627.815: 90.0869% ( 78) 00:07:34.713 15627.815 - 15728.640: 90.8940% ( 78) 00:07:34.713 15728.640 - 15829.465: 91.5046% ( 59) 00:07:34.713 15829.465 - 15930.289: 92.1565% ( 63) 00:07:34.713 15930.289 - 16031.114: 92.8704% ( 69) 00:07:34.713 16031.114 - 16131.938: 93.5637% ( 67) 00:07:34.713 16131.938 - 16232.763: 
94.1639% ( 58) 00:07:34.713 16232.763 - 16333.588: 94.7641% ( 58) 00:07:34.713 16333.588 - 16434.412: 95.1987% ( 42) 00:07:34.713 16434.412 - 16535.237: 95.6126% ( 40) 00:07:34.713 16535.237 - 16636.062: 96.0161% ( 39) 00:07:34.713 16636.062 - 16736.886: 96.3887% ( 36) 00:07:34.713 16736.886 - 16837.711: 96.6784% ( 28) 00:07:34.713 16837.711 - 16938.535: 96.9888% ( 30) 00:07:34.713 16938.535 - 17039.360: 97.3820% ( 38) 00:07:34.713 17039.360 - 17140.185: 97.6511% ( 26) 00:07:34.713 17140.185 - 17241.009: 97.8063% ( 15) 00:07:34.713 17241.009 - 17341.834: 97.9201% ( 11) 00:07:34.713 17341.834 - 17442.658: 97.9822% ( 6) 00:07:34.713 17442.658 - 17543.483: 98.0132% ( 3) 00:07:34.713 17543.483 - 17644.308: 98.0339% ( 2) 00:07:34.713 17644.308 - 17745.132: 98.1167% ( 8) 00:07:34.713 17745.132 - 17845.957: 98.1995% ( 8) 00:07:34.713 17845.957 - 17946.782: 98.2616% ( 6) 00:07:34.713 17946.782 - 18047.606: 98.3133% ( 5) 00:07:34.713 18047.606 - 18148.431: 98.3858% ( 7) 00:07:34.713 18148.431 - 18249.255: 98.4478% ( 6) 00:07:34.713 18249.255 - 18350.080: 98.5203% ( 7) 00:07:34.713 18350.080 - 18450.905: 98.5824% ( 6) 00:07:34.713 18450.905 - 18551.729: 98.6445% ( 6) 00:07:34.713 18551.729 - 18652.554: 98.6755% ( 3) 00:07:34.713 25306.978 - 25407.803: 98.6858% ( 1) 00:07:34.713 25407.803 - 25508.628: 98.7169% ( 3) 00:07:34.713 25508.628 - 25609.452: 98.7583% ( 4) 00:07:34.713 25609.452 - 25710.277: 98.7997% ( 4) 00:07:34.713 25710.277 - 25811.102: 98.8411% ( 4) 00:07:34.713 25811.102 - 26012.751: 98.9342% ( 9) 00:07:34.713 26012.751 - 26214.400: 99.0170% ( 8) 00:07:34.713 26214.400 - 26416.049: 99.0998% ( 8) 00:07:34.713 26416.049 - 26617.698: 99.1929% ( 9) 00:07:34.713 26617.698 - 26819.348: 99.2757% ( 8) 00:07:34.713 26819.348 - 27020.997: 99.3377% ( 6) 00:07:34.713 32868.825 - 33070.474: 99.4102% ( 7) 00:07:34.713 33070.474 - 33272.123: 99.4930% ( 8) 00:07:34.713 33272.123 - 33473.772: 99.5757% ( 8) 00:07:34.713 33473.772 - 33675.422: 99.6689% ( 9) 00:07:34.713 33675.422 - 33877.071: 99.7517% ( 8) 00:07:34.713 33877.071 - 34078.720: 99.8448% ( 9) 00:07:34.713 34078.720 - 34280.369: 99.9276% ( 8) 00:07:34.713 34280.369 - 34482.018: 100.0000% ( 7) 00:07:34.713 00:07:34.713 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:34.713 ============================================================================== 00:07:34.713 Range in us Cumulative IO count 00:07:34.713 9275.865 - 9326.277: 0.0207% ( 2) 00:07:34.713 9326.277 - 9376.689: 0.0724% ( 5) 00:07:34.713 9376.689 - 9427.102: 0.1242% ( 5) 00:07:34.713 9427.102 - 9477.514: 0.1656% ( 4) 00:07:34.713 9477.514 - 9527.926: 0.2070% ( 4) 00:07:34.713 9527.926 - 9578.338: 0.3725% ( 16) 00:07:34.713 9578.338 - 9628.751: 0.4450% ( 7) 00:07:34.713 9628.751 - 9679.163: 0.4863% ( 4) 00:07:34.713 9679.163 - 9729.575: 0.5277% ( 4) 00:07:34.713 9729.575 - 9779.988: 0.5691% ( 4) 00:07:34.713 9779.988 - 9830.400: 0.6002% ( 3) 00:07:34.713 9830.400 - 9880.812: 0.6829% ( 8) 00:07:34.713 9880.812 - 9931.225: 0.7450% ( 6) 00:07:34.713 9931.225 - 9981.637: 0.9002% ( 15) 00:07:34.713 9981.637 - 10032.049: 1.1072% ( 20) 00:07:34.713 10032.049 - 10082.462: 1.2521% ( 14) 00:07:34.713 10082.462 - 10132.874: 1.5004% ( 24) 00:07:34.713 10132.874 - 10183.286: 1.7901% ( 28) 00:07:34.713 10183.286 - 10233.698: 1.9557% ( 16) 00:07:34.713 10233.698 - 10284.111: 2.1109% ( 15) 00:07:34.713 10284.111 - 10334.523: 2.2868% ( 17) 00:07:34.713 10334.523 - 10384.935: 2.5766% ( 28) 00:07:34.713 10384.935 - 10435.348: 3.0008% ( 41) 00:07:34.713 10435.348 - 10485.760: 3.4561% 
( 44) 00:07:34.713 10485.760 - 10536.172: 4.1908% ( 71) 00:07:34.713 10536.172 - 10586.585: 5.0083% ( 79) 00:07:34.713 10586.585 - 10636.997: 5.9603% ( 92) 00:07:34.713 10636.997 - 10687.409: 6.8916% ( 90) 00:07:34.713 10687.409 - 10737.822: 7.8435% ( 92) 00:07:34.713 10737.822 - 10788.234: 9.0025% ( 112) 00:07:34.713 10788.234 - 10838.646: 9.9959% ( 96) 00:07:34.713 10838.646 - 10889.058: 10.8340% ( 81) 00:07:34.713 10889.058 - 10939.471: 11.5687% ( 71) 00:07:34.713 10939.471 - 10989.883: 12.3034% ( 71) 00:07:34.713 10989.883 - 11040.295: 13.1105% ( 78) 00:07:34.713 11040.295 - 11090.708: 13.9694% ( 83) 00:07:34.713 11090.708 - 11141.120: 14.6213% ( 63) 00:07:34.713 11141.120 - 11191.532: 15.2628% ( 62) 00:07:34.714 11191.532 - 11241.945: 15.8733% ( 59) 00:07:34.714 11241.945 - 11292.357: 16.6080% ( 71) 00:07:34.714 11292.357 - 11342.769: 17.4048% ( 77) 00:07:34.714 11342.769 - 11393.182: 18.1912% ( 76) 00:07:34.714 11393.182 - 11443.594: 18.9052% ( 69) 00:07:34.714 11443.594 - 11494.006: 19.7537% ( 82) 00:07:34.714 11494.006 - 11544.418: 20.9334% ( 114) 00:07:34.714 11544.418 - 11594.831: 22.0820% ( 111) 00:07:34.714 11594.831 - 11645.243: 23.1478% ( 103) 00:07:34.714 11645.243 - 11695.655: 24.5447% ( 135) 00:07:34.714 11695.655 - 11746.068: 25.4967% ( 92) 00:07:34.714 11746.068 - 11796.480: 26.3969% ( 87) 00:07:34.714 11796.480 - 11846.892: 27.4214% ( 99) 00:07:34.714 11846.892 - 11897.305: 28.3733% ( 92) 00:07:34.714 11897.305 - 11947.717: 29.3253% ( 92) 00:07:34.714 11947.717 - 11998.129: 30.2463% ( 89) 00:07:34.714 11998.129 - 12048.542: 31.1983% ( 92) 00:07:34.714 12048.542 - 12098.954: 31.9640% ( 74) 00:07:34.714 12098.954 - 12149.366: 32.9574% ( 96) 00:07:34.714 12149.366 - 12199.778: 33.9300% ( 94) 00:07:34.714 12199.778 - 12250.191: 34.8820% ( 92) 00:07:34.714 12250.191 - 12300.603: 36.0513% ( 113) 00:07:34.714 12300.603 - 12351.015: 37.1171% ( 103) 00:07:34.714 12351.015 - 12401.428: 38.1312% ( 98) 00:07:34.714 12401.428 - 12451.840: 39.1867% ( 102) 00:07:34.714 12451.840 - 12502.252: 40.1904% ( 97) 00:07:34.714 12502.252 - 12552.665: 41.1734% ( 95) 00:07:34.714 12552.665 - 12603.077: 42.3013% ( 109) 00:07:34.714 12603.077 - 12653.489: 43.2844% ( 95) 00:07:34.714 12653.489 - 12703.902: 44.3088% ( 99) 00:07:34.714 12703.902 - 12754.314: 45.3332% ( 99) 00:07:34.714 12754.314 - 12804.726: 46.4921% ( 112) 00:07:34.714 12804.726 - 12855.138: 47.5786% ( 105) 00:07:34.714 12855.138 - 12905.551: 48.6858% ( 107) 00:07:34.714 12905.551 - 13006.375: 51.2314% ( 246) 00:07:34.714 13006.375 - 13107.200: 53.4768% ( 217) 00:07:34.714 13107.200 - 13208.025: 55.5257% ( 198) 00:07:34.714 13208.025 - 13308.849: 57.2434% ( 166) 00:07:34.714 13308.849 - 13409.674: 58.9507% ( 165) 00:07:34.714 13409.674 - 13510.498: 60.7616% ( 175) 00:07:34.714 13510.498 - 13611.323: 62.6242% ( 180) 00:07:34.714 13611.323 - 13712.148: 64.5592% ( 187) 00:07:34.714 13712.148 - 13812.972: 66.3700% ( 175) 00:07:34.714 13812.972 - 13913.797: 68.2844% ( 185) 00:07:34.714 13913.797 - 14014.622: 70.4367% ( 208) 00:07:34.714 14014.622 - 14115.446: 72.3303% ( 183) 00:07:34.714 14115.446 - 14216.271: 74.0791% ( 169) 00:07:34.714 14216.271 - 14317.095: 75.6933% ( 156) 00:07:34.714 14317.095 - 14417.920: 77.3179% ( 157) 00:07:34.714 14417.920 - 14518.745: 78.9942% ( 162) 00:07:34.714 14518.745 - 14619.569: 80.1531% ( 112) 00:07:34.714 14619.569 - 14720.394: 81.1776% ( 99) 00:07:34.714 14720.394 - 14821.218: 82.2641% ( 105) 00:07:34.714 14821.218 - 14922.043: 83.2988% ( 100) 00:07:34.714 14922.043 - 15022.868: 84.8924% ( 154) 
00:07:34.714 15022.868 - 15123.692: 86.5687% ( 162) 00:07:34.714 15123.692 - 15224.517: 87.6966% ( 109) 00:07:34.714 15224.517 - 15325.342: 88.7417% ( 101) 00:07:34.714 15325.342 - 15426.166: 89.7972% ( 102) 00:07:34.714 15426.166 - 15526.991: 90.9147% ( 108) 00:07:34.714 15526.991 - 15627.815: 91.8150% ( 87) 00:07:34.714 15627.815 - 15728.640: 92.6118% ( 77) 00:07:34.714 15728.640 - 15829.465: 93.2533% ( 62) 00:07:34.714 15829.465 - 15930.289: 93.6983% ( 43) 00:07:34.714 15930.289 - 16031.114: 94.1018% ( 39) 00:07:34.714 16031.114 - 16131.938: 94.5985% ( 48) 00:07:34.714 16131.938 - 16232.763: 95.0228% ( 41) 00:07:34.714 16232.763 - 16333.588: 95.3746% ( 34) 00:07:34.714 16333.588 - 16434.412: 95.6954% ( 31) 00:07:34.714 16434.412 - 16535.237: 96.0161% ( 31) 00:07:34.714 16535.237 - 16636.062: 96.3059% ( 28) 00:07:34.714 16636.062 - 16736.886: 96.5646% ( 25) 00:07:34.714 16736.886 - 16837.711: 96.8026% ( 23) 00:07:34.714 16837.711 - 16938.535: 96.9992% ( 19) 00:07:34.714 16938.535 - 17039.360: 97.2372% ( 23) 00:07:34.714 17039.360 - 17140.185: 97.5166% ( 27) 00:07:34.714 17140.185 - 17241.009: 97.7028% ( 18) 00:07:34.714 17241.009 - 17341.834: 97.8684% ( 16) 00:07:34.714 17341.834 - 17442.658: 98.0753% ( 20) 00:07:34.714 17442.658 - 17543.483: 98.1788% ( 10) 00:07:34.714 17543.483 - 17644.308: 98.2409% ( 6) 00:07:34.714 17644.308 - 17745.132: 98.3030% ( 6) 00:07:34.714 17745.132 - 17845.957: 98.3651% ( 6) 00:07:34.714 17845.957 - 17946.782: 98.4272% ( 6) 00:07:34.714 17946.782 - 18047.606: 98.4892% ( 6) 00:07:34.714 18047.606 - 18148.431: 98.5617% ( 7) 00:07:34.714 18148.431 - 18249.255: 98.6238% ( 6) 00:07:34.714 18249.255 - 18350.080: 98.6755% ( 5) 00:07:34.714 24500.382 - 24601.206: 98.6858% ( 1) 00:07:34.714 24601.206 - 24702.031: 98.7169% ( 3) 00:07:34.714 24702.031 - 24802.855: 98.7376% ( 2) 00:07:34.714 24802.855 - 24903.680: 98.7686% ( 3) 00:07:34.714 24903.680 - 25004.505: 98.7997% ( 3) 00:07:34.714 25004.505 - 25105.329: 98.8204% ( 2) 00:07:34.714 25105.329 - 25206.154: 98.8514% ( 3) 00:07:34.714 25206.154 - 25306.978: 98.8825% ( 3) 00:07:34.714 25306.978 - 25407.803: 98.9135% ( 3) 00:07:34.714 25407.803 - 25508.628: 98.9342% ( 2) 00:07:34.714 25508.628 - 25609.452: 98.9652% ( 3) 00:07:34.714 25609.452 - 25710.277: 99.0066% ( 4) 00:07:34.714 25710.277 - 25811.102: 99.0480% ( 4) 00:07:34.714 25811.102 - 26012.751: 99.1308% ( 8) 00:07:34.714 26012.751 - 26214.400: 99.2136% ( 8) 00:07:34.714 26214.400 - 26416.049: 99.2964% ( 8) 00:07:34.714 26416.049 - 26617.698: 99.3377% ( 4) 00:07:34.714 32263.877 - 32465.526: 99.3481% ( 1) 00:07:34.714 32465.526 - 32667.175: 99.3895% ( 4) 00:07:34.714 32667.175 - 32868.825: 99.4619% ( 7) 00:07:34.714 32868.825 - 33070.474: 99.5447% ( 8) 00:07:34.714 33070.474 - 33272.123: 99.6275% ( 8) 00:07:34.714 33272.123 - 33473.772: 99.7103% ( 8) 00:07:34.714 33473.772 - 33675.422: 99.8034% ( 9) 00:07:34.714 33675.422 - 33877.071: 99.8862% ( 8) 00:07:34.714 33877.071 - 34078.720: 99.9793% ( 9) 00:07:34.714 34078.720 - 34280.369: 100.0000% ( 2) 00:07:34.714 00:07:34.714 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:34.714 ============================================================================== 00:07:34.714 Range in us Cumulative IO count 00:07:34.714 10032.049 - 10082.462: 0.0207% ( 2) 00:07:34.714 10082.462 - 10132.874: 0.1449% ( 12) 00:07:34.714 10132.874 - 10183.286: 0.3829% ( 23) 00:07:34.714 10183.286 - 10233.698: 0.6209% ( 23) 00:07:34.714 10233.698 - 10284.111: 0.9416% ( 31) 00:07:34.714 10284.111 - 10334.523: 1.2624% ( 31) 
00:07:34.714 10334.523 - 10384.935: 1.8315% ( 55) 00:07:34.714 10384.935 - 10435.348: 2.2765% ( 43) 00:07:34.714 10435.348 - 10485.760: 3.0112% ( 71) 00:07:34.714 10485.760 - 10536.172: 3.6217% ( 59) 00:07:34.714 10536.172 - 10586.585: 4.2322% ( 59) 00:07:34.714 10586.585 - 10636.997: 4.9151% ( 66) 00:07:34.714 10636.997 - 10687.409: 5.6912% ( 75) 00:07:34.714 10687.409 - 10737.822: 6.5087% ( 79) 00:07:34.714 10737.822 - 10788.234: 7.6469% ( 110) 00:07:34.715 10788.234 - 10838.646: 8.4023% ( 73) 00:07:34.715 10838.646 - 10889.058: 9.3026% ( 87) 00:07:34.715 10889.058 - 10939.471: 10.5960% ( 125) 00:07:34.715 10939.471 - 10989.883: 11.4445% ( 82) 00:07:34.715 10989.883 - 11040.295: 12.0240% ( 56) 00:07:34.715 11040.295 - 11090.708: 12.5724% ( 53) 00:07:34.715 11090.708 - 11141.120: 13.1623% ( 57) 00:07:34.715 11141.120 - 11191.532: 14.0728% ( 88) 00:07:34.715 11191.532 - 11241.945: 14.5799% ( 49) 00:07:34.715 11241.945 - 11292.357: 15.1800% ( 58) 00:07:34.715 11292.357 - 11342.769: 16.0493% ( 84) 00:07:34.715 11342.769 - 11393.182: 16.8357% ( 76) 00:07:34.715 11393.182 - 11443.594: 17.7359% ( 87) 00:07:34.715 11443.594 - 11494.006: 18.5844% ( 82) 00:07:34.715 11494.006 - 11544.418: 19.5468% ( 93) 00:07:34.715 11544.418 - 11594.831: 20.2711% ( 70) 00:07:34.715 11594.831 - 11645.243: 20.9851% ( 69) 00:07:34.715 11645.243 - 11695.655: 21.9888% ( 97) 00:07:34.715 11695.655 - 11746.068: 23.1478% ( 112) 00:07:34.715 11746.068 - 11796.480: 24.4205% ( 123) 00:07:34.715 11796.480 - 11846.892: 25.7864% ( 132) 00:07:34.715 11846.892 - 11897.305: 27.1937% ( 136) 00:07:34.715 11897.305 - 11947.717: 28.4872% ( 125) 00:07:34.715 11947.717 - 11998.129: 30.1738% ( 163) 00:07:34.715 11998.129 - 12048.542: 31.4466% ( 123) 00:07:34.715 12048.542 - 12098.954: 32.6469% ( 116) 00:07:34.715 12098.954 - 12149.366: 33.5265% ( 85) 00:07:34.715 12149.366 - 12199.778: 34.5095% ( 95) 00:07:34.715 12199.778 - 12250.191: 35.4512% ( 91) 00:07:34.715 12250.191 - 12300.603: 36.3307% ( 85) 00:07:34.715 12300.603 - 12351.015: 37.1999% ( 84) 00:07:34.715 12351.015 - 12401.428: 37.9450% ( 72) 00:07:34.715 12401.428 - 12451.840: 38.8969% ( 92) 00:07:34.715 12451.840 - 12502.252: 39.7351% ( 81) 00:07:34.715 12502.252 - 12552.665: 40.5422% ( 78) 00:07:34.715 12552.665 - 12603.077: 41.6805% ( 110) 00:07:34.715 12603.077 - 12653.489: 43.0257% ( 130) 00:07:34.715 12653.489 - 12703.902: 44.0708% ( 101) 00:07:34.715 12703.902 - 12754.314: 45.1159% ( 101) 00:07:34.715 12754.314 - 12804.726: 46.1921% ( 104) 00:07:34.715 12804.726 - 12855.138: 47.4027% ( 117) 00:07:34.715 12855.138 - 12905.551: 48.5927% ( 115) 00:07:34.715 12905.551 - 13006.375: 51.3452% ( 266) 00:07:34.715 13006.375 - 13107.200: 53.9632% ( 253) 00:07:34.715 13107.200 - 13208.025: 56.6639% ( 261) 00:07:34.715 13208.025 - 13308.849: 59.4267% ( 267) 00:07:34.715 13308.849 - 13409.674: 61.4756% ( 198) 00:07:34.715 13409.674 - 13510.498: 63.8762% ( 232) 00:07:34.715 13510.498 - 13611.323: 65.6147% ( 168) 00:07:34.715 13611.323 - 13712.148: 67.0737% ( 141) 00:07:34.715 13712.148 - 13812.972: 68.5327% ( 141) 00:07:34.715 13812.972 - 13913.797: 70.1159% ( 153) 00:07:34.715 13913.797 - 14014.622: 72.0302% ( 185) 00:07:34.715 14014.622 - 14115.446: 73.9031% ( 181) 00:07:34.715 14115.446 - 14216.271: 75.5484% ( 159) 00:07:34.715 14216.271 - 14317.095: 76.9661% ( 137) 00:07:34.715 14317.095 - 14417.920: 78.3630% ( 135) 00:07:34.715 14417.920 - 14518.745: 79.5012% ( 110) 00:07:34.715 14518.745 - 14619.569: 80.7430% ( 120) 00:07:34.715 14619.569 - 14720.394: 81.9640% ( 118) 
00:07:34.715 14720.394 - 14821.218: 83.0608% ( 106) 00:07:34.715 14821.218 - 14922.043: 84.1577% ( 106) 00:07:34.715 14922.043 - 15022.868: 84.9959% ( 81) 00:07:34.715 15022.868 - 15123.692: 86.1031% ( 107) 00:07:34.715 15123.692 - 15224.517: 87.2517% ( 111) 00:07:34.715 15224.517 - 15325.342: 88.2761% ( 99) 00:07:34.715 15325.342 - 15426.166: 89.2074% ( 90) 00:07:34.715 15426.166 - 15526.991: 89.9834% ( 75) 00:07:34.715 15526.991 - 15627.815: 90.5733% ( 57) 00:07:34.715 15627.815 - 15728.640: 91.4218% ( 82) 00:07:34.715 15728.640 - 15829.465: 92.1047% ( 66) 00:07:34.715 15829.465 - 15930.289: 92.7463% ( 62) 00:07:34.715 15930.289 - 16031.114: 93.4499% ( 68) 00:07:34.715 16031.114 - 16131.938: 94.2363% ( 76) 00:07:34.715 16131.938 - 16232.763: 95.2711% ( 100) 00:07:34.715 16232.763 - 16333.588: 95.8195% ( 53) 00:07:34.715 16333.588 - 16434.412: 96.2645% ( 43) 00:07:34.715 16434.412 - 16535.237: 96.5439% ( 27) 00:07:34.715 16535.237 - 16636.062: 96.8543% ( 30) 00:07:34.715 16636.062 - 16736.886: 97.1337% ( 27) 00:07:34.715 16736.886 - 16837.711: 97.3717% ( 23) 00:07:34.715 16837.711 - 16938.535: 97.5890% ( 21) 00:07:34.715 16938.535 - 17039.360: 97.7752% ( 18) 00:07:34.715 17039.360 - 17140.185: 97.9305% ( 15) 00:07:34.715 17140.185 - 17241.009: 98.0443% ( 11) 00:07:34.715 17241.009 - 17341.834: 98.1788% ( 13) 00:07:34.715 17341.834 - 17442.658: 98.3030% ( 12) 00:07:34.715 17442.658 - 17543.483: 98.4272% ( 12) 00:07:34.715 17543.483 - 17644.308: 98.5513% ( 12) 00:07:34.715 17644.308 - 17745.132: 98.6651% ( 11) 00:07:34.715 17745.132 - 17845.957: 98.6755% ( 1) 00:07:34.715 23088.837 - 23189.662: 98.6962% ( 2) 00:07:34.715 23189.662 - 23290.486: 98.7479% ( 5) 00:07:34.715 23290.486 - 23391.311: 98.7893% ( 4) 00:07:34.715 23391.311 - 23492.135: 98.8307% ( 4) 00:07:34.715 23492.135 - 23592.960: 98.8721% ( 4) 00:07:34.715 23592.960 - 23693.785: 98.9135% ( 4) 00:07:34.715 23693.785 - 23794.609: 98.9549% ( 4) 00:07:34.715 23794.609 - 23895.434: 98.9963% ( 4) 00:07:34.715 23895.434 - 23996.258: 99.0377% ( 4) 00:07:34.715 23996.258 - 24097.083: 99.0791% ( 4) 00:07:34.715 24097.083 - 24197.908: 99.1204% ( 4) 00:07:34.715 24197.908 - 24298.732: 99.1618% ( 4) 00:07:34.715 24298.732 - 24399.557: 99.2032% ( 4) 00:07:34.715 24399.557 - 24500.382: 99.2446% ( 4) 00:07:34.715 24500.382 - 24601.206: 99.2964% ( 5) 00:07:34.715 24601.206 - 24702.031: 99.3377% ( 4) 00:07:34.715 31255.631 - 31457.280: 99.3481% ( 1) 00:07:34.715 31457.280 - 31658.929: 99.4309% ( 8) 00:07:34.715 31658.929 - 31860.578: 99.5137% ( 8) 00:07:34.715 31860.578 - 32062.228: 99.6068% ( 9) 00:07:34.715 32062.228 - 32263.877: 99.6896% ( 8) 00:07:34.715 32263.877 - 32465.526: 99.7724% ( 8) 00:07:34.715 32465.526 - 32667.175: 99.8655% ( 9) 00:07:34.715 32667.175 - 32868.825: 99.9483% ( 8) 00:07:34.715 32868.825 - 33070.474: 100.0000% ( 5) 00:07:34.715 00:07:34.715 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:34.715 ============================================================================== 00:07:34.715 Range in us Cumulative IO count 00:07:34.715 9477.514 - 9527.926: 0.0103% ( 1) 00:07:34.715 9578.338 - 9628.751: 0.0207% ( 1) 00:07:34.715 9628.751 - 9679.163: 0.0621% ( 4) 00:07:34.715 9679.163 - 9729.575: 0.0931% ( 3) 00:07:34.715 9729.575 - 9779.988: 0.1242% ( 3) 00:07:34.715 9779.988 - 9830.400: 0.1966% ( 7) 00:07:34.715 9830.400 - 9880.812: 0.3104% ( 11) 00:07:34.715 9880.812 - 9931.225: 0.4863% ( 17) 00:07:34.715 9931.225 - 9981.637: 0.7036% ( 21) 00:07:34.715 9981.637 - 10032.049: 1.2003% ( 48) 00:07:34.715 
10032.049 - 10082.462: 1.4901% ( 28) 00:07:34.715 10082.462 - 10132.874: 1.7798% ( 28) 00:07:34.715 10132.874 - 10183.286: 1.9661% ( 18) 00:07:34.716 10183.286 - 10233.698: 2.1834% ( 21) 00:07:34.716 10233.698 - 10284.111: 2.4627% ( 27) 00:07:34.716 10284.111 - 10334.523: 2.6283% ( 16) 00:07:34.716 10334.523 - 10384.935: 2.7525% ( 12) 00:07:34.716 10384.935 - 10435.348: 2.9284% ( 17) 00:07:34.716 10435.348 - 10485.760: 3.2078% ( 27) 00:07:34.716 10485.760 - 10536.172: 3.6527% ( 43) 00:07:34.716 10536.172 - 10586.585: 3.9632% ( 30) 00:07:34.716 10586.585 - 10636.997: 4.3667% ( 39) 00:07:34.716 10636.997 - 10687.409: 5.1221% ( 73) 00:07:34.716 10687.409 - 10737.822: 5.8154% ( 67) 00:07:34.716 10737.822 - 10788.234: 6.3121% ( 48) 00:07:34.716 10788.234 - 10838.646: 7.0157% ( 68) 00:07:34.716 10838.646 - 10889.058: 7.7297% ( 69) 00:07:34.716 10889.058 - 10939.471: 8.3195% ( 57) 00:07:34.716 10939.471 - 10989.883: 9.0542% ( 71) 00:07:34.716 10989.883 - 11040.295: 9.9027% ( 82) 00:07:34.716 11040.295 - 11090.708: 10.6788% ( 75) 00:07:34.716 11090.708 - 11141.120: 11.3928% ( 69) 00:07:34.716 11141.120 - 11191.532: 12.2620% ( 84) 00:07:34.716 11191.532 - 11241.945: 13.1416% ( 85) 00:07:34.716 11241.945 - 11292.357: 14.0625% ( 89) 00:07:34.716 11292.357 - 11342.769: 15.0145% ( 92) 00:07:34.716 11342.769 - 11393.182: 16.4011% ( 134) 00:07:34.716 11393.182 - 11443.594: 17.5290% ( 109) 00:07:34.716 11443.594 - 11494.006: 18.6879% ( 112) 00:07:34.716 11494.006 - 11544.418: 19.8262% ( 110) 00:07:34.716 11544.418 - 11594.831: 20.9023% ( 104) 00:07:34.716 11594.831 - 11645.243: 22.0613% ( 112) 00:07:34.716 11645.243 - 11695.655: 23.2616% ( 116) 00:07:34.716 11695.655 - 11746.068: 24.9690% ( 165) 00:07:34.716 11746.068 - 11796.480: 26.3142% ( 130) 00:07:34.716 11796.480 - 11846.892: 27.6387% ( 128) 00:07:34.716 11846.892 - 11897.305: 28.6838% ( 101) 00:07:34.716 11897.305 - 11947.717: 29.7392% ( 102) 00:07:34.716 11947.717 - 11998.129: 30.9913% ( 121) 00:07:34.716 11998.129 - 12048.542: 32.2123% ( 118) 00:07:34.716 12048.542 - 12098.954: 33.2161% ( 97) 00:07:34.716 12098.954 - 12149.366: 34.0853% ( 84) 00:07:34.716 12149.366 - 12199.778: 34.8096% ( 70) 00:07:34.716 12199.778 - 12250.191: 35.5857% ( 75) 00:07:34.716 12250.191 - 12300.603: 36.8171% ( 119) 00:07:34.716 12300.603 - 12351.015: 37.7587% ( 91) 00:07:34.716 12351.015 - 12401.428: 38.5762% ( 79) 00:07:34.716 12401.428 - 12451.840: 39.7041% ( 109) 00:07:34.716 12451.840 - 12502.252: 40.6974% ( 96) 00:07:34.716 12502.252 - 12552.665: 41.5149% ( 79) 00:07:34.716 12552.665 - 12603.077: 42.5083% ( 96) 00:07:34.716 12603.077 - 12653.489: 43.6776% ( 113) 00:07:34.716 12653.489 - 12703.902: 44.6296% ( 92) 00:07:34.716 12703.902 - 12754.314: 45.5712% ( 91) 00:07:34.716 12754.314 - 12804.726: 46.5956% ( 99) 00:07:34.716 12804.726 - 12855.138: 47.5579% ( 93) 00:07:34.716 12855.138 - 12905.551: 48.6858% ( 109) 00:07:34.716 12905.551 - 13006.375: 50.8175% ( 206) 00:07:34.716 13006.375 - 13107.200: 52.9387% ( 205) 00:07:34.716 13107.200 - 13208.025: 55.7637% ( 273) 00:07:34.716 13208.025 - 13308.849: 58.4127% ( 256) 00:07:34.716 13308.849 - 13409.674: 61.2169% ( 271) 00:07:34.716 13409.674 - 13510.498: 63.2968% ( 201) 00:07:34.716 13510.498 - 13611.323: 65.2835% ( 192) 00:07:34.716 13611.323 - 13712.148: 67.3117% ( 196) 00:07:34.716 13712.148 - 13812.972: 69.2363% ( 186) 00:07:34.716 13812.972 - 13913.797: 71.0265% ( 173) 00:07:34.716 13913.797 - 14014.622: 72.2993% ( 123) 00:07:34.716 14014.622 - 14115.446: 73.6962% ( 135) 00:07:34.716 14115.446 - 14216.271: 
75.1759% ( 143) 00:07:34.716 14216.271 - 14317.095: 76.8108% ( 158) 00:07:34.716 14317.095 - 14417.920: 78.1353% ( 128) 00:07:34.716 14417.920 - 14518.745: 79.8013% ( 161) 00:07:34.716 14518.745 - 14619.569: 81.3328% ( 148) 00:07:34.716 14619.569 - 14720.394: 82.7918% ( 141) 00:07:34.716 14720.394 - 14821.218: 84.0542% ( 122) 00:07:34.716 14821.218 - 14922.043: 85.1511% ( 106) 00:07:34.716 14922.043 - 15022.868: 86.1858% ( 100) 00:07:34.716 15022.868 - 15123.692: 87.0240% ( 81) 00:07:34.716 15123.692 - 15224.517: 87.9760% ( 92) 00:07:34.716 15224.517 - 15325.342: 88.8452% ( 84) 00:07:34.716 15325.342 - 15426.166: 89.6213% ( 75) 00:07:34.716 15426.166 - 15526.991: 90.1904% ( 55) 00:07:34.716 15526.991 - 15627.815: 90.8113% ( 60) 00:07:34.716 15627.815 - 15728.640: 91.5666% ( 73) 00:07:34.716 15728.640 - 15829.465: 92.1875% ( 60) 00:07:34.716 15829.465 - 15930.289: 92.7152% ( 51) 00:07:34.716 15930.289 - 16031.114: 93.2533% ( 52) 00:07:34.716 16031.114 - 16131.938: 93.8949% ( 62) 00:07:34.716 16131.938 - 16232.763: 94.2881% ( 38) 00:07:34.716 16232.763 - 16333.588: 94.6606% ( 36) 00:07:34.716 16333.588 - 16434.412: 95.1987% ( 52) 00:07:34.716 16434.412 - 16535.237: 95.7471% ( 53) 00:07:34.716 16535.237 - 16636.062: 96.3473% ( 58) 00:07:34.716 16636.062 - 16736.886: 96.6991% ( 34) 00:07:34.716 16736.886 - 16837.711: 97.1233% ( 41) 00:07:34.716 16837.711 - 16938.535: 97.4648% ( 33) 00:07:34.716 16938.535 - 17039.360: 97.7752% ( 30) 00:07:34.716 17039.360 - 17140.185: 98.0132% ( 23) 00:07:34.716 17140.185 - 17241.009: 98.1892% ( 17) 00:07:34.716 17241.009 - 17341.834: 98.3340% ( 14) 00:07:34.716 17341.834 - 17442.658: 98.4892% ( 15) 00:07:34.716 17442.658 - 17543.483: 98.6238% ( 13) 00:07:34.716 17543.483 - 17644.308: 98.6755% ( 5) 00:07:34.716 21979.766 - 22080.591: 98.6858% ( 1) 00:07:34.716 22080.591 - 22181.415: 98.7169% ( 3) 00:07:34.716 22181.415 - 22282.240: 98.7686% ( 5) 00:07:34.716 22282.240 - 22383.065: 98.8100% ( 4) 00:07:34.716 22383.065 - 22483.889: 98.8514% ( 4) 00:07:34.716 22483.889 - 22584.714: 98.8928% ( 4) 00:07:34.716 22584.714 - 22685.538: 98.9342% ( 4) 00:07:34.716 22685.538 - 22786.363: 98.9756% ( 4) 00:07:34.716 22786.363 - 22887.188: 99.0273% ( 5) 00:07:34.716 22887.188 - 22988.012: 99.0687% ( 4) 00:07:34.716 22988.012 - 23088.837: 99.1101% ( 4) 00:07:34.716 23088.837 - 23189.662: 99.1515% ( 4) 00:07:34.716 23189.662 - 23290.486: 99.1929% ( 4) 00:07:34.716 23290.486 - 23391.311: 99.2343% ( 4) 00:07:34.716 23391.311 - 23492.135: 99.2757% ( 4) 00:07:34.716 23492.135 - 23592.960: 99.3274% ( 5) 00:07:34.716 23592.960 - 23693.785: 99.3377% ( 1) 00:07:34.716 30247.385 - 30449.034: 99.3584% ( 2) 00:07:34.716 30449.034 - 30650.683: 99.4412% ( 8) 00:07:34.716 30650.683 - 30852.332: 99.5240% ( 8) 00:07:34.716 30852.332 - 31053.982: 99.6068% ( 8) 00:07:34.716 31053.982 - 31255.631: 99.6896% ( 8) 00:07:34.716 31255.631 - 31457.280: 99.7827% ( 9) 00:07:34.716 31457.280 - 31658.929: 99.8655% ( 8) 00:07:34.716 31658.929 - 31860.578: 99.9483% ( 8) 00:07:34.716 31860.578 - 32062.228: 100.0000% ( 5) 00:07:34.716 00:07:34.716 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:34.716 ============================================================================== 00:07:34.716 Range in us Cumulative IO count 00:07:34.716 8922.978 - 8973.391: 0.0103% ( 1) 00:07:34.716 9023.803 - 9074.215: 0.0206% ( 1) 00:07:34.716 9074.215 - 9124.628: 0.0411% ( 2) 00:07:34.716 9124.628 - 9175.040: 0.0720% ( 3) 00:07:34.716 9175.040 - 9225.452: 0.0925% ( 2) 00:07:34.717 9225.452 - 9275.865: 
0.1234% ( 3) 00:07:34.717 9275.865 - 9326.277: 0.1645% ( 4) 00:07:34.717 9326.277 - 9376.689: 0.1850% ( 2) 00:07:34.717 9376.689 - 9427.102: 0.2262% ( 4) 00:07:34.717 9427.102 - 9477.514: 0.2878% ( 6) 00:07:34.717 9477.514 - 9527.926: 0.4317% ( 14) 00:07:34.717 9527.926 - 9578.338: 0.5757% ( 14) 00:07:34.717 9578.338 - 9628.751: 0.6579% ( 8) 00:07:34.717 9628.751 - 9679.163: 0.7401% ( 8) 00:07:34.717 9679.163 - 9729.575: 0.8224% ( 8) 00:07:34.717 9729.575 - 9779.988: 0.9046% ( 8) 00:07:34.717 9779.988 - 9830.400: 1.1205% ( 21) 00:07:34.717 9830.400 - 9880.812: 1.2130% ( 9) 00:07:34.717 9880.812 - 9931.225: 1.3261% ( 11) 00:07:34.717 9931.225 - 9981.637: 1.4391% ( 11) 00:07:34.717 9981.637 - 10032.049: 1.6139% ( 17) 00:07:34.717 10032.049 - 10082.462: 1.7270% ( 11) 00:07:34.717 10082.462 - 10132.874: 1.8195% ( 9) 00:07:34.717 10132.874 - 10183.286: 1.9223% ( 10) 00:07:34.717 10183.286 - 10233.698: 2.0251% ( 10) 00:07:34.717 10233.698 - 10284.111: 2.0662% ( 4) 00:07:34.717 10284.111 - 10334.523: 2.1073% ( 4) 00:07:34.717 10334.523 - 10384.935: 2.1896% ( 8) 00:07:34.717 10384.935 - 10435.348: 2.2615% ( 7) 00:07:34.717 10435.348 - 10485.760: 2.4979% ( 23) 00:07:34.717 10485.760 - 10536.172: 2.6727% ( 17) 00:07:34.717 10536.172 - 10586.585: 3.0016% ( 32) 00:07:34.717 10586.585 - 10636.997: 3.4951% ( 48) 00:07:34.717 10636.997 - 10687.409: 4.2866% ( 77) 00:07:34.717 10687.409 - 10737.822: 5.0781% ( 77) 00:07:34.717 10737.822 - 10788.234: 5.9211% ( 82) 00:07:34.717 10788.234 - 10838.646: 6.8873% ( 94) 00:07:34.717 10838.646 - 10889.058: 7.8433% ( 93) 00:07:34.717 10889.058 - 10939.471: 8.5218% ( 66) 00:07:34.717 10939.471 - 10989.883: 9.2311% ( 69) 00:07:34.717 10989.883 - 11040.295: 9.8376% ( 59) 00:07:34.717 11040.295 - 11090.708: 10.8758% ( 101) 00:07:34.717 11090.708 - 11141.120: 11.6982% ( 80) 00:07:34.717 11141.120 - 11191.532: 12.6439% ( 92) 00:07:34.717 11191.532 - 11241.945: 13.4868% ( 82) 00:07:34.717 11241.945 - 11292.357: 14.4120% ( 90) 00:07:34.717 11292.357 - 11342.769: 15.6867% ( 124) 00:07:34.717 11342.769 - 11393.182: 16.8894% ( 117) 00:07:34.717 11393.182 - 11443.594: 18.1332% ( 121) 00:07:34.717 11443.594 - 11494.006: 19.4285% ( 126) 00:07:34.717 11494.006 - 11544.418: 20.8573% ( 139) 00:07:34.717 11544.418 - 11594.831: 21.9367% ( 105) 00:07:34.717 11594.831 - 11645.243: 22.9132% ( 95) 00:07:34.717 11645.243 - 11695.655: 24.2188% ( 127) 00:07:34.717 11695.655 - 11746.068: 25.2467% ( 100) 00:07:34.717 11746.068 - 11796.480: 26.3775% ( 110) 00:07:34.717 11796.480 - 11846.892: 27.5802% ( 117) 00:07:34.717 11846.892 - 11897.305: 28.6698% ( 106) 00:07:34.717 11897.305 - 11947.717: 29.6567% ( 96) 00:07:34.717 11947.717 - 11998.129: 30.5613% ( 88) 00:07:34.717 11998.129 - 12048.542: 31.7023% ( 111) 00:07:34.717 12048.542 - 12098.954: 32.5452% ( 82) 00:07:34.717 12098.954 - 12149.366: 33.1826% ( 62) 00:07:34.717 12149.366 - 12199.778: 34.0358% ( 83) 00:07:34.717 12199.778 - 12250.191: 34.8376% ( 78) 00:07:34.717 12250.191 - 12300.603: 35.7422% ( 88) 00:07:34.717 12300.603 - 12351.015: 36.5132% ( 75) 00:07:34.717 12351.015 - 12401.428: 37.2430% ( 71) 00:07:34.717 12401.428 - 12451.840: 38.3532% ( 108) 00:07:34.717 12451.840 - 12502.252: 39.1961% ( 82) 00:07:34.717 12502.252 - 12552.665: 39.8335% ( 62) 00:07:34.717 12552.665 - 12603.077: 40.9025% ( 104) 00:07:34.717 12603.077 - 12653.489: 42.1567% ( 122) 00:07:34.717 12653.489 - 12703.902: 43.4416% ( 125) 00:07:34.717 12703.902 - 12754.314: 44.6752% ( 120) 00:07:34.717 12754.314 - 12804.726: 45.7134% ( 101) 00:07:34.717 12804.726 - 
12855.138: 46.7722% ( 103) 00:07:34.717 12855.138 - 12905.551: 47.8618% ( 106) 00:07:34.717 12905.551 - 13006.375: 49.8766% ( 196) 00:07:34.717 13006.375 - 13107.200: 52.3129% ( 237) 00:07:34.717 13107.200 - 13208.025: 55.0062% ( 262) 00:07:34.717 13208.025 - 13308.849: 57.8228% ( 274) 00:07:34.717 13308.849 - 13409.674: 60.8964% ( 299) 00:07:34.717 13409.674 - 13510.498: 63.7336% ( 276) 00:07:34.717 13510.498 - 13611.323: 66.4576% ( 265) 00:07:34.717 13611.323 - 13712.148: 68.4725% ( 196) 00:07:34.717 13712.148 - 13812.972: 70.5387% ( 201) 00:07:34.717 13812.972 - 13913.797: 72.9132% ( 231) 00:07:34.717 13913.797 - 14014.622: 74.7225% ( 176) 00:07:34.717 14014.622 - 14115.446: 76.0382% ( 128) 00:07:34.717 14115.446 - 14216.271: 77.0868% ( 102) 00:07:34.717 14216.271 - 14317.095: 77.9811% ( 87) 00:07:34.717 14317.095 - 14417.920: 79.1118% ( 110) 00:07:34.717 14417.920 - 14518.745: 80.4174% ( 127) 00:07:34.717 14518.745 - 14619.569: 81.6817% ( 123) 00:07:34.717 14619.569 - 14720.394: 82.8433% ( 113) 00:07:34.717 14720.394 - 14821.218: 83.9741% ( 110) 00:07:34.717 14821.218 - 14922.043: 84.9301% ( 93) 00:07:34.717 14922.043 - 15022.868: 85.9683% ( 101) 00:07:34.717 15022.868 - 15123.692: 87.1299% ( 113) 00:07:34.717 15123.692 - 15224.517: 87.8906% ( 74) 00:07:34.717 15224.517 - 15325.342: 88.6822% ( 77) 00:07:34.717 15325.342 - 15426.166: 89.3606% ( 66) 00:07:34.717 15426.166 - 15526.991: 90.0802% ( 70) 00:07:34.717 15526.991 - 15627.815: 90.8409% ( 74) 00:07:34.717 15627.815 - 15728.640: 91.3754% ( 52) 00:07:34.717 15728.640 - 15829.465: 91.8894% ( 50) 00:07:34.717 15829.465 - 15930.289: 92.5370% ( 63) 00:07:34.717 15930.289 - 16031.114: 93.1641% ( 61) 00:07:34.717 16031.114 - 16131.938: 93.9248% ( 74) 00:07:34.717 16131.938 - 16232.763: 94.8396% ( 89) 00:07:34.717 16232.763 - 16333.588: 95.4873% ( 63) 00:07:34.717 16333.588 - 16434.412: 96.1246% ( 62) 00:07:34.718 16434.412 - 16535.237: 96.6386% ( 50) 00:07:34.718 16535.237 - 16636.062: 96.9984% ( 35) 00:07:34.718 16636.062 - 16736.886: 97.2656% ( 26) 00:07:34.718 16736.886 - 16837.711: 97.5637% ( 29) 00:07:34.718 16837.711 - 16938.535: 97.8207% ( 25) 00:07:34.718 16938.535 - 17039.360: 98.1702% ( 34) 00:07:34.718 17039.360 - 17140.185: 98.5506% ( 37) 00:07:34.718 17140.185 - 17241.009: 98.9206% ( 36) 00:07:34.718 17241.009 - 17341.834: 99.0851% ( 16) 00:07:34.718 17341.834 - 17442.658: 99.1674% ( 8) 00:07:34.718 17442.658 - 17543.483: 99.1982% ( 3) 00:07:34.718 17543.483 - 17644.308: 99.2393% ( 4) 00:07:34.718 17644.308 - 17745.132: 99.2804% ( 4) 00:07:34.718 17745.132 - 17845.957: 99.3010% ( 2) 00:07:34.718 17845.957 - 17946.782: 99.3318% ( 3) 00:07:34.718 17946.782 - 18047.606: 99.3421% ( 1) 00:07:34.718 21677.292 - 21778.117: 99.3524% ( 1) 00:07:34.718 21778.117 - 21878.942: 99.3832% ( 3) 00:07:34.718 21878.942 - 21979.766: 99.4243% ( 4) 00:07:34.718 21979.766 - 22080.591: 99.4655% ( 4) 00:07:34.718 22080.591 - 22181.415: 99.5169% ( 5) 00:07:34.718 22181.415 - 22282.240: 99.5580% ( 4) 00:07:34.718 22282.240 - 22383.065: 99.5991% ( 4) 00:07:34.718 22383.065 - 22483.889: 99.6402% ( 4) 00:07:34.718 22483.889 - 22584.714: 99.6813% ( 4) 00:07:34.718 22584.714 - 22685.538: 99.7225% ( 4) 00:07:34.718 22685.538 - 22786.363: 99.7636% ( 4) 00:07:34.718 22786.363 - 22887.188: 99.8047% ( 4) 00:07:34.718 22887.188 - 22988.012: 99.8561% ( 5) 00:07:34.718 22988.012 - 23088.837: 99.8972% ( 4) 00:07:34.718 23088.837 - 23189.662: 99.9383% ( 4) 00:07:34.718 23189.662 - 23290.486: 99.9794% ( 4) 00:07:34.718 23290.486 - 23391.311: 100.0000% ( 2) 
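Note: the cumulative latency histograms above all use the same bucket format, "low - high: cumulative% ( count )", one bucket per printed line, which makes them easy to post-process from a saved console log. A minimal sketch, assuming the output has been saved as build.log (the filename, and the one-bucket-per-line layout of the original console output, are assumptions):

    # Sum the per-bucket IO counts; a bucket line looks like
    # "10032.049 - 10082.462: 1.4901% ( 28)" and the count is the last field.
    awk '/ - [0-9.]+: +[0-9.]+%/ { gsub(/[()]/, "", $NF); total += $NF }
         END { print total, "IOs" }' build.log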
00:07:34.718 00:07:34.718 17:24:07 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:34.718 00:07:34.718 real 0m2.522s 00:07:34.718 user 0m2.189s 00:07:34.718 sys 0m0.214s 00:07:34.718 ************************************ 00:07:34.718 END TEST nvme_perf 00:07:34.718 ************************************ 00:07:34.718 17:24:07 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.718 17:24:07 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:34.718 17:24:07 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:34.718 17:24:07 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:34.718 17:24:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.718 17:24:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:34.718 ************************************ 00:07:34.718 START TEST nvme_hello_world 00:07:34.718 ************************************ 00:07:34.718 17:24:07 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:34.718 Initializing NVMe Controllers 00:07:34.718 Attached to 0000:00:10.0 00:07:34.718 Namespace ID: 1 size: 6GB 00:07:34.718 Attached to 0000:00:11.0 00:07:34.718 Namespace ID: 1 size: 5GB 00:07:34.718 Attached to 0000:00:13.0 00:07:34.718 Namespace ID: 1 size: 1GB 00:07:34.718 Attached to 0000:00:12.0 00:07:34.718 Namespace ID: 1 size: 4GB 00:07:34.718 Namespace ID: 2 size: 4GB 00:07:34.718 Namespace ID: 3 size: 4GB 00:07:34.718 Initialization complete. 00:07:34.718 INFO: using host memory buffer for IO 00:07:34.718 Hello world! 00:07:34.718 INFO: using host memory buffer for IO 00:07:34.718 Hello world! 00:07:34.718 INFO: using host memory buffer for IO 00:07:34.718 Hello world! 00:07:34.718 INFO: using host memory buffer for IO 00:07:34.718 Hello world! 00:07:34.718 INFO: using host memory buffer for IO 00:07:34.718 Hello world! 00:07:34.718 INFO: using host memory buffer for IO 00:07:34.718 Hello world! 
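Note: the hello_world pass above attaches to all four controllers and does a write/read round trip through a host memory buffer on every namespace, which is why six "Hello world!" lines appear (one each for 10.0, 11.0 and 13.0, three for the multi-namespace 12.0). The binary can be rerun standalone exactly as the harness invokes it; -i 0 is the shared memory instance id used throughout this suite:

    # Same invocation as run_test nvme_hello_world above
    /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0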
00:07:34.718 00:07:34.718 real 0m0.240s 00:07:34.718 user 0m0.081s 00:07:34.718 sys 0m0.102s 00:07:34.718 ************************************ 00:07:34.718 17:24:08 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.718 17:24:08 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:34.718 END TEST nvme_hello_world 00:07:34.718 ************************************ 00:07:34.979 17:24:08 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:34.979 17:24:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:34.979 17:24:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.979 17:24:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:34.979 ************************************ 00:07:34.979 START TEST nvme_sgl 00:07:34.979 ************************************ 00:07:34.979 17:24:08 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:34.979 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:34.979 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:34.979 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:34.979 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:34.979 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:34.979 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:34.979 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:34.979 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:34.979 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:35.286 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:35.286 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:35.286 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:35.286 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:35.286 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:35.286 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:35.286 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:35.286 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:35.286 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:35.286 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:35.286 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:35.286 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:35.286 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:35.286 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:35.286 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:35.286 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:35.286 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:35.286 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:35.286 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:35.286 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:35.286 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:35.286 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:35.286 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:35.286 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:35.286 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:07:35.286 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:35.286 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:35.286 NVMe Readv/Writev Request test 00:07:35.286 Attached to 0000:00:10.0 00:07:35.286 Attached to 0000:00:11.0 00:07:35.286 Attached to 0000:00:13.0 00:07:35.286 Attached to 0000:00:12.0 00:07:35.286 0000:00:10.0: build_io_request_2 test passed 00:07:35.286 0000:00:10.0: build_io_request_4 test passed 00:07:35.286 0000:00:10.0: build_io_request_5 test passed 00:07:35.286 0000:00:10.0: build_io_request_6 test passed 00:07:35.286 0000:00:10.0: build_io_request_7 test passed 00:07:35.286 0000:00:10.0: build_io_request_10 test passed 00:07:35.286 0000:00:11.0: build_io_request_2 test passed 00:07:35.286 0000:00:11.0: build_io_request_4 test passed 00:07:35.286 0000:00:11.0: build_io_request_5 test passed 00:07:35.286 0000:00:11.0: build_io_request_6 test passed 00:07:35.286 0000:00:11.0: build_io_request_7 test passed 00:07:35.286 0000:00:11.0: build_io_request_10 test passed 00:07:35.286 Cleaning up... 00:07:35.286 00:07:35.286 real 0m0.300s 00:07:35.286 user 0m0.149s 00:07:35.286 sys 0m0.107s 00:07:35.286 17:24:08 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.286 17:24:08 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:35.286 ************************************ 00:07:35.286 END TEST nvme_sgl 00:07:35.286 ************************************ 00:07:35.286 17:24:08 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:35.286 17:24:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.286 17:24:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.286 17:24:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:35.286 ************************************ 00:07:35.286 START TEST nvme_e2edp 00:07:35.286 ************************************ 00:07:35.286 17:24:08 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:35.574 NVMe Write/Read with End-to-End data protection test 00:07:35.574 Attached to 0000:00:10.0 00:07:35.574 Attached to 0000:00:11.0 00:07:35.574 Attached to 0000:00:13.0 00:07:35.574 Attached to 0000:00:12.0 00:07:35.574 Cleaning up... 
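Note: in the SGL pass above, the "Invalid IO length parameter" lines appear to be deliberate negative cases: requests 0, 1, 3, 8, 9 and 11 are rejected on every controller, only the requests expected to succeed report "test passed", and the test still finishes with a clean END TEST. That 0000:00:13.0 and 0000:00:12.0 reject all twelve requests is presumably down to their smaller namespaces. Both this binary and the end-to-end data protection one directly above can be run standalone:

    /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
    /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp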
00:07:35.574 00:07:35.574 real 0m0.203s 00:07:35.574 user 0m0.069s 00:07:35.574 sys 0m0.093s 00:07:35.574 17:24:08 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.574 ************************************ 00:07:35.574 END TEST nvme_e2edp 00:07:35.574 ************************************ 00:07:35.574 17:24:08 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:35.574 17:24:08 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:35.574 17:24:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.574 17:24:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.574 17:24:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:35.574 ************************************ 00:07:35.574 START TEST nvme_reserve 00:07:35.574 ************************************ 00:07:35.574 17:24:08 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:35.574 ===================================================== 00:07:35.574 NVMe Controller at PCI bus 0, device 16, function 0 00:07:35.574 ===================================================== 00:07:35.574 Reservations: Not Supported 00:07:35.574 ===================================================== 00:07:35.574 NVMe Controller at PCI bus 0, device 17, function 0 00:07:35.574 ===================================================== 00:07:35.574 Reservations: Not Supported 00:07:35.574 ===================================================== 00:07:35.574 NVMe Controller at PCI bus 0, device 19, function 0 00:07:35.574 ===================================================== 00:07:35.575 Reservations: Not Supported 00:07:35.575 ===================================================== 00:07:35.575 NVMe Controller at PCI bus 0, device 18, function 0 00:07:35.575 ===================================================== 00:07:35.575 Reservations: Not Supported 00:07:35.575 Reservation test passed 00:07:35.575 00:07:35.575 real 0m0.204s 00:07:35.575 user 0m0.071s 00:07:35.575 sys 0m0.095s 00:07:35.575 17:24:08 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.575 ************************************ 00:07:35.575 END TEST nvme_reserve 00:07:35.575 ************************************ 00:07:35.575 17:24:08 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:35.836 17:24:08 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:35.836 17:24:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.836 17:24:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.836 17:24:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:35.836 ************************************ 00:07:35.836 START TEST nvme_err_injection 00:07:35.836 ************************************ 00:07:35.836 17:24:08 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:35.836 NVMe Error Injection test 00:07:35.836 Attached to 0000:00:10.0 00:07:35.836 Attached to 0000:00:11.0 00:07:35.836 Attached to 0000:00:13.0 00:07:35.836 Attached to 0000:00:12.0 00:07:35.836 0000:00:10.0: get features failed as expected 00:07:35.836 0000:00:11.0: get features failed as expected 00:07:35.836 0000:00:13.0: get features failed as expected 00:07:35.836 0000:00:12.0: get features failed as expected 00:07:35.836 
0000:00:10.0: get features successfully as expected 00:07:35.836 0000:00:11.0: get features successfully as expected 00:07:35.836 0000:00:13.0: get features successfully as expected 00:07:35.836 0000:00:12.0: get features successfully as expected 00:07:35.836 0000:00:12.0: read failed as expected 00:07:35.836 0000:00:10.0: read failed as expected 00:07:35.836 0000:00:11.0: read failed as expected 00:07:35.836 0000:00:13.0: read failed as expected 00:07:35.836 0000:00:12.0: read successfully as expected 00:07:35.836 0000:00:10.0: read successfully as expected 00:07:35.836 0000:00:11.0: read successfully as expected 00:07:35.836 0000:00:13.0: read successfully as expected 00:07:35.836 Cleaning up... 00:07:36.098 ************************************ 00:07:36.098 END TEST nvme_err_injection 00:07:36.098 ************************************ 00:07:36.098 00:07:36.098 real 0m0.242s 00:07:36.098 user 0m0.089s 00:07:36.098 sys 0m0.103s 00:07:36.098 17:24:09 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.098 17:24:09 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:36.098 17:24:09 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:36.098 17:24:09 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:07:36.098 17:24:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.098 17:24:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:36.098 ************************************ 00:07:36.098 START TEST nvme_overhead 00:07:36.098 ************************************ 00:07:36.098 17:24:09 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:37.479 Initializing NVMe Controllers 00:07:37.479 Attached to 0000:00:10.0 00:07:37.479 Attached to 0000:00:11.0 00:07:37.479 Attached to 0000:00:13.0 00:07:37.479 Attached to 0000:00:12.0 00:07:37.479 Initialization complete. Launching workers. 
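Note: the error-injection pass above comes in matched pairs: each "failed as expected" line is a Get Features or Read issued while an error is injected on that controller, and the corresponding "successfully as expected" line is the same command after the injection is removed. The overhead run that has just launched its workers measures per-IO software overhead for the single-queue 4 KiB workload, printing nanosecond submit/complete averages followed by microsecond-bucketed histograms. Standalone invocations, as the harness uses them:

    /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
    /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0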
00:07:37.479 submit (in ns) avg, min, max = 12432.1, 10333.8, 109486.2 00:07:37.479 complete (in ns) avg, min, max = 8295.5, 7291.5, 349831.5 00:07:37.479 00:07:37.479 Submit histogram 00:07:37.479 ================ 00:07:37.479 Range in us Cumulative Count 00:07:37.479 10.289 - 10.338: 0.0260% ( 1) 00:07:37.479 10.388 - 10.437: 0.0520% ( 1) 00:07:37.479 10.437 - 10.486: 0.1041% ( 2) 00:07:37.479 10.486 - 10.535: 0.1561% ( 2) 00:07:37.479 10.535 - 10.585: 0.2082% ( 2) 00:07:37.479 10.634 - 10.683: 0.2862% ( 3) 00:07:37.479 10.683 - 10.732: 0.3383% ( 2) 00:07:37.479 10.782 - 10.831: 0.4163% ( 3) 00:07:37.479 10.831 - 10.880: 0.5204% ( 4) 00:07:37.479 10.880 - 10.929: 0.5725% ( 2) 00:07:37.479 10.929 - 10.978: 0.7546% ( 7) 00:07:37.479 10.978 - 11.028: 1.3271% ( 22) 00:07:37.479 11.028 - 11.077: 1.9776% ( 25) 00:07:37.479 11.077 - 11.126: 3.0705% ( 42) 00:07:37.479 11.126 - 11.175: 5.1522% ( 80) 00:07:37.479 11.175 - 11.225: 9.0554% ( 150) 00:07:37.479 11.225 - 11.274: 14.7021% ( 217) 00:07:37.479 11.274 - 11.323: 21.9100% ( 277) 00:07:37.479 11.323 - 11.372: 29.7424% ( 301) 00:07:37.479 11.372 - 11.422: 37.5228% ( 299) 00:07:37.479 11.422 - 11.471: 44.5485% ( 270) 00:07:37.479 11.471 - 11.520: 50.9758% ( 247) 00:07:37.479 11.520 - 11.569: 56.7265% ( 221) 00:07:37.479 11.569 - 11.618: 61.2022% ( 172) 00:07:37.479 11.618 - 11.668: 65.2875% ( 157) 00:07:37.479 11.668 - 11.717: 68.0978% ( 108) 00:07:37.479 11.717 - 11.766: 70.4398% ( 90) 00:07:37.479 11.766 - 11.815: 72.0010% ( 60) 00:07:37.479 11.815 - 11.865: 73.3281% ( 51) 00:07:37.479 11.865 - 11.914: 74.8374% ( 58) 00:07:37.479 11.914 - 11.963: 75.9042% ( 41) 00:07:37.479 11.963 - 12.012: 76.5808% ( 26) 00:07:37.479 12.012 - 12.062: 77.0752% ( 19) 00:07:37.479 12.062 - 12.111: 77.8038% ( 28) 00:07:37.479 12.111 - 12.160: 78.3242% ( 20) 00:07:37.479 12.160 - 12.209: 78.8447% ( 20) 00:07:37.479 12.209 - 12.258: 79.2090% ( 14) 00:07:37.479 12.258 - 12.308: 79.4431% ( 9) 00:07:37.479 12.308 - 12.357: 79.7554% ( 12) 00:07:37.479 12.357 - 12.406: 80.0937% ( 13) 00:07:37.479 12.406 - 12.455: 80.2758% ( 7) 00:07:37.479 12.455 - 12.505: 80.5360% ( 10) 00:07:37.480 12.505 - 12.554: 80.7442% ( 8) 00:07:37.480 12.554 - 12.603: 81.0825% ( 13) 00:07:37.480 12.603 - 12.702: 81.3427% ( 10) 00:07:37.480 12.702 - 12.800: 81.6289% ( 11) 00:07:37.480 12.800 - 12.898: 82.0193% ( 15) 00:07:37.480 12.898 - 12.997: 82.1233% ( 4) 00:07:37.480 12.997 - 13.095: 82.3315% ( 8) 00:07:37.480 13.095 - 13.194: 82.5397% ( 8) 00:07:37.480 13.194 - 13.292: 82.7739% ( 9) 00:07:37.480 13.292 - 13.391: 82.9560% ( 7) 00:07:37.480 13.391 - 13.489: 83.1382% ( 7) 00:07:37.480 13.489 - 13.588: 83.2683% ( 5) 00:07:37.480 13.588 - 13.686: 83.5285% ( 10) 00:07:37.480 13.686 - 13.785: 84.0489% ( 20) 00:07:37.480 13.785 - 13.883: 84.7775% ( 28) 00:07:37.480 13.883 - 13.982: 85.6362% ( 33) 00:07:37.480 13.982 - 14.080: 86.7291% ( 42) 00:07:37.480 14.080 - 14.178: 87.4577% ( 28) 00:07:37.480 14.178 - 14.277: 88.3164% ( 33) 00:07:37.480 14.277 - 14.375: 89.1231% ( 31) 00:07:37.480 14.375 - 14.474: 89.5134% ( 15) 00:07:37.480 14.474 - 14.572: 90.0338% ( 20) 00:07:37.480 14.572 - 14.671: 90.4762% ( 17) 00:07:37.480 14.671 - 14.769: 90.7104% ( 9) 00:07:37.480 14.769 - 14.868: 90.8405% ( 5) 00:07:37.480 14.868 - 14.966: 90.9446% ( 4) 00:07:37.480 14.966 - 15.065: 91.1267% ( 7) 00:07:37.480 15.065 - 15.163: 91.2568% ( 5) 00:07:37.480 15.163 - 15.262: 91.5170% ( 10) 00:07:37.480 15.262 - 15.360: 91.9074% ( 15) 00:07:37.480 15.360 - 15.458: 91.9854% ( 3) 00:07:37.480 15.458 - 15.557: 92.2977% ( 12) 
00:07:37.480 15.557 - 15.655: 92.5319% ( 9) 00:07:37.480 15.655 - 15.754: 92.7140% ( 7) 00:07:37.480 15.754 - 15.852: 92.8962% ( 7) 00:07:37.480 15.852 - 15.951: 93.1304% ( 9) 00:07:37.480 15.951 - 16.049: 93.3385% ( 8) 00:07:37.480 16.049 - 16.148: 93.5207% ( 7) 00:07:37.480 16.148 - 16.246: 93.5988% ( 3) 00:07:37.480 16.246 - 16.345: 93.9370% ( 13) 00:07:37.480 16.345 - 16.443: 94.1452% ( 8) 00:07:37.480 16.443 - 16.542: 94.3534% ( 8) 00:07:37.480 16.542 - 16.640: 94.5095% ( 6) 00:07:37.480 16.640 - 16.738: 94.6656% ( 6) 00:07:37.480 16.738 - 16.837: 94.8218% ( 6) 00:07:37.480 16.837 - 16.935: 94.8478% ( 1) 00:07:37.480 16.935 - 17.034: 95.0039% ( 6) 00:07:37.480 17.034 - 17.132: 95.2381% ( 9) 00:07:37.480 17.132 - 17.231: 95.2901% ( 2) 00:07:37.480 17.231 - 17.329: 95.4463% ( 6) 00:07:37.480 17.329 - 17.428: 95.5243% ( 3) 00:07:37.480 17.428 - 17.526: 95.7325% ( 8) 00:07:37.480 17.526 - 17.625: 95.8626% ( 5) 00:07:37.480 17.625 - 17.723: 96.0448% ( 7) 00:07:37.480 17.723 - 17.822: 96.1749% ( 5) 00:07:37.480 17.822 - 17.920: 96.3570% ( 7) 00:07:37.480 17.920 - 18.018: 96.4351% ( 3) 00:07:37.480 18.018 - 18.117: 96.6693% ( 9) 00:07:37.480 18.117 - 18.215: 96.6953% ( 1) 00:07:37.480 18.215 - 18.314: 96.8254% ( 5) 00:07:37.480 18.314 - 18.412: 96.9815% ( 6) 00:07:37.480 18.412 - 18.511: 97.0336% ( 2) 00:07:37.480 18.511 - 18.609: 97.2157% ( 7) 00:07:37.480 18.609 - 18.708: 97.4239% ( 8) 00:07:37.480 18.708 - 18.806: 97.5800% ( 6) 00:07:37.480 18.806 - 18.905: 97.6321% ( 2) 00:07:37.480 18.905 - 19.003: 97.7361% ( 4) 00:07:37.480 19.003 - 19.102: 97.7882% ( 2) 00:07:37.480 19.102 - 19.200: 97.8402% ( 2) 00:07:37.480 19.200 - 19.298: 97.9703% ( 5) 00:07:37.480 19.298 - 19.397: 98.0484% ( 3) 00:07:37.480 19.397 - 19.495: 98.0744% ( 1) 00:07:37.480 19.495 - 19.594: 98.1525% ( 3) 00:07:37.480 19.594 - 19.692: 98.1785% ( 1) 00:07:37.480 19.692 - 19.791: 98.2566% ( 3) 00:07:37.480 19.791 - 19.889: 98.4387% ( 7) 00:07:37.480 19.889 - 19.988: 98.5948% ( 6) 00:07:37.480 19.988 - 20.086: 98.6469% ( 2) 00:07:37.480 20.086 - 20.185: 98.6729% ( 1) 00:07:37.480 20.185 - 20.283: 98.7250% ( 2) 00:07:37.480 20.283 - 20.382: 98.8030% ( 3) 00:07:37.480 20.382 - 20.480: 98.8551% ( 2) 00:07:37.480 20.480 - 20.578: 98.9331% ( 3) 00:07:37.480 20.677 - 20.775: 98.9852% ( 2) 00:07:37.480 20.874 - 20.972: 99.0112% ( 1) 00:07:37.480 20.972 - 21.071: 99.0372% ( 1) 00:07:37.480 21.071 - 21.169: 99.0632% ( 1) 00:07:37.480 21.760 - 21.858: 99.0893% ( 1) 00:07:37.480 22.055 - 22.154: 99.1153% ( 1) 00:07:37.480 22.843 - 22.942: 99.1413% ( 1) 00:07:37.480 22.942 - 23.040: 99.1673% ( 1) 00:07:37.480 23.138 - 23.237: 99.1933% ( 1) 00:07:37.480 23.237 - 23.335: 99.2194% ( 1) 00:07:37.480 23.631 - 23.729: 99.2454% ( 1) 00:07:37.480 23.729 - 23.828: 99.3234% ( 3) 00:07:37.480 23.828 - 23.926: 99.3495% ( 1) 00:07:37.480 24.615 - 24.714: 99.3755% ( 1) 00:07:37.480 25.403 - 25.600: 99.4275% ( 2) 00:07:37.480 26.191 - 26.388: 99.4536% ( 1) 00:07:37.480 26.388 - 26.585: 99.4796% ( 1) 00:07:37.480 27.175 - 27.372: 99.5056% ( 1) 00:07:37.480 28.554 - 28.751: 99.5576% ( 2) 00:07:37.480 29.932 - 30.129: 99.5837% ( 1) 00:07:37.480 30.129 - 30.326: 99.6097% ( 1) 00:07:37.480 31.902 - 32.098: 99.6357% ( 1) 00:07:37.480 36.234 - 36.431: 99.6617% ( 1) 00:07:37.480 36.628 - 36.825: 99.6877% ( 1) 00:07:37.480 36.825 - 37.022: 99.7138% ( 1) 00:07:37.480 37.809 - 38.006: 99.7398% ( 1) 00:07:37.480 38.006 - 38.203: 99.7658% ( 1) 00:07:37.480 39.385 - 39.582: 99.7918% ( 1) 00:07:37.480 44.308 - 44.505: 99.8179% ( 1) 00:07:37.480 45.883 - 46.080: 
99.8439% ( 1) 00:07:37.480 53.169 - 53.563: 99.8699% ( 1) 00:07:37.480 55.926 - 56.320: 99.8959% ( 1) 00:07:37.480 67.348 - 67.742: 99.9219% ( 1) 00:07:37.480 68.135 - 68.529: 99.9480% ( 1) 00:07:37.480 81.526 - 81.920: 99.9740% ( 1) 00:07:37.480 108.702 - 109.489: 100.0000% ( 1) 00:07:37.480 00:07:37.480 Complete histogram 00:07:37.480 ================== 00:07:37.480 Range in us Cumulative Count 00:07:37.480 7.286 - 7.335: 0.3123% ( 12) 00:07:37.480 7.335 - 7.385: 2.8884% ( 99) 00:07:37.480 7.385 - 7.434: 9.5238% ( 255) 00:07:37.480 7.434 - 7.483: 21.2334% ( 450) 00:07:37.480 7.483 - 7.532: 35.1288% ( 534) 00:07:37.480 7.532 - 7.582: 48.1915% ( 502) 00:07:37.480 7.582 - 7.631: 59.1725% ( 422) 00:07:37.480 7.631 - 7.680: 66.0942% ( 266) 00:07:37.480 7.680 - 7.729: 70.9862% ( 188) 00:07:37.480 7.729 - 7.778: 73.4843% ( 96) 00:07:37.480 7.778 - 7.828: 75.3318% ( 71) 00:07:37.480 7.828 - 7.877: 76.2946% ( 37) 00:07:37.480 7.877 - 7.926: 77.3354% ( 40) 00:07:37.480 7.926 - 7.975: 78.6105% ( 49) 00:07:37.480 7.975 - 8.025: 79.8595% ( 48) 00:07:37.480 8.025 - 8.074: 80.7702% ( 35) 00:07:37.480 8.074 - 8.123: 81.8111% ( 40) 00:07:37.480 8.123 - 8.172: 82.8519% ( 40) 00:07:37.480 8.172 - 8.222: 83.9448% ( 42) 00:07:37.480 8.222 - 8.271: 84.6734% ( 28) 00:07:37.480 8.271 - 8.320: 85.4281% ( 29) 00:07:37.480 8.320 - 8.369: 85.9485% ( 20) 00:07:37.480 8.369 - 8.418: 86.4689% ( 20) 00:07:37.480 8.418 - 8.468: 86.8592% ( 15) 00:07:37.480 8.468 - 8.517: 87.2495% ( 15) 00:07:37.480 8.517 - 8.566: 87.5618% ( 12) 00:07:37.480 8.566 - 8.615: 87.6919% ( 5) 00:07:37.480 8.615 - 8.665: 88.0042% ( 12) 00:07:37.480 8.665 - 8.714: 88.1603% ( 6) 00:07:37.480 8.714 - 8.763: 88.2904% ( 5) 00:07:37.480 8.763 - 8.812: 88.5246% ( 9) 00:07:37.480 8.812 - 8.862: 88.5506% ( 1) 00:07:37.480 8.862 - 8.911: 88.6027% ( 2) 00:07:37.480 8.911 - 8.960: 88.7588% ( 6) 00:07:37.480 8.960 - 9.009: 88.7848% ( 1) 00:07:37.480 9.009 - 9.058: 88.8108% ( 1) 00:07:37.480 9.108 - 9.157: 88.8889% ( 3) 00:07:37.480 9.157 - 9.206: 89.0710% ( 7) 00:07:37.480 9.206 - 9.255: 89.3052% ( 9) 00:07:37.480 9.255 - 9.305: 89.7476% ( 17) 00:07:37.480 9.305 - 9.354: 90.3981% ( 25) 00:07:37.480 9.354 - 9.403: 91.0487% ( 25) 00:07:37.480 9.403 - 9.452: 91.6472% ( 23) 00:07:37.480 9.452 - 9.502: 92.2196% ( 22) 00:07:37.480 9.502 - 9.551: 92.9222% ( 27) 00:07:37.480 9.551 - 9.600: 93.4686% ( 21) 00:07:37.480 9.600 - 9.649: 94.0151% ( 21) 00:07:37.480 9.649 - 9.698: 94.3013% ( 11) 00:07:37.480 9.698 - 9.748: 94.5095% ( 8) 00:07:37.480 9.748 - 9.797: 94.6656% ( 6) 00:07:37.480 9.797 - 9.846: 94.8478% ( 7) 00:07:37.480 9.846 - 9.895: 95.0299% ( 7) 00:07:37.480 9.895 - 9.945: 95.2381% ( 8) 00:07:37.480 9.945 - 9.994: 95.3942% ( 6) 00:07:37.480 9.994 - 10.043: 95.4723% ( 3) 00:07:37.480 10.043 - 10.092: 95.4983% ( 1) 00:07:37.480 10.092 - 10.142: 95.5243% ( 1) 00:07:37.480 10.142 - 10.191: 95.5764% ( 2) 00:07:37.480 10.191 - 10.240: 95.6544% ( 3) 00:07:37.480 10.240 - 10.289: 95.7325% ( 3) 00:07:37.480 10.289 - 10.338: 95.8366% ( 4) 00:07:37.480 10.338 - 10.388: 95.8626% ( 1) 00:07:37.480 10.388 - 10.437: 95.9407% ( 3) 00:07:37.480 10.437 - 10.486: 96.0968% ( 6) 00:07:37.480 10.486 - 10.535: 96.1488% ( 2) 00:07:37.480 10.535 - 10.585: 96.2269% ( 3) 00:07:37.480 10.585 - 10.634: 96.3050% ( 3) 00:07:37.480 10.634 - 10.683: 96.3830% ( 3) 00:07:37.480 10.683 - 10.732: 96.4351% ( 2) 00:07:37.480 10.732 - 10.782: 96.5392% ( 4) 00:07:37.480 10.782 - 10.831: 96.6432% ( 4) 00:07:37.480 10.831 - 10.880: 96.7473% ( 4) 00:07:37.480 10.929 - 10.978: 96.7994% ( 2) 
00:07:37.480 10.978 - 11.028: 96.8254% ( 1) 00:07:37.480 11.126 - 11.175: 96.8774% ( 2) 00:07:37.480 11.175 - 11.225: 96.9035% ( 1) 00:07:37.480 11.274 - 11.323: 96.9555% ( 2) 00:07:37.480 11.323 - 11.372: 96.9815% ( 1) 00:07:37.480 11.471 - 11.520: 97.0075% ( 1) 00:07:37.480 11.569 - 11.618: 97.0336% ( 1) 00:07:37.480 11.618 - 11.668: 97.0596% ( 1) 00:07:37.480 11.668 - 11.717: 97.0856% ( 1) 00:07:37.480 11.865 - 11.914: 97.1116% ( 1) 00:07:37.480 12.062 - 12.111: 97.1377% ( 1) 00:07:37.480 12.160 - 12.209: 97.1637% ( 1) 00:07:37.480 12.258 - 12.308: 97.1897% ( 1) 00:07:37.480 12.308 - 12.357: 97.2417% ( 2) 00:07:37.480 12.505 - 12.554: 97.2678% ( 1) 00:07:37.480 12.997 - 13.095: 97.3198% ( 2) 00:07:37.480 13.095 - 13.194: 97.3979% ( 3) 00:07:37.480 13.194 - 13.292: 97.4759% ( 3) 00:07:37.480 13.292 - 13.391: 97.5280% ( 2) 00:07:37.480 13.489 - 13.588: 97.6060% ( 3) 00:07:37.480 13.588 - 13.686: 97.6581% ( 2) 00:07:37.480 13.686 - 13.785: 97.7361% ( 3) 00:07:37.480 13.785 - 13.883: 97.9443% ( 8) 00:07:37.480 13.883 - 13.982: 97.9703% ( 1) 00:07:37.480 13.982 - 14.080: 97.9964% ( 1) 00:07:37.480 14.080 - 14.178: 98.1004% ( 4) 00:07:37.480 14.178 - 14.277: 98.1785% ( 3) 00:07:37.480 14.375 - 14.474: 98.2305% ( 2) 00:07:37.480 14.474 - 14.572: 98.2826% ( 2) 00:07:37.480 14.572 - 14.671: 98.3086% ( 1) 00:07:37.480 14.769 - 14.868: 98.3867% ( 3) 00:07:37.480 14.966 - 15.065: 98.4387% ( 2) 00:07:37.480 15.065 - 15.163: 98.5168% ( 3) 00:07:37.480 15.163 - 15.262: 98.5428% ( 1) 00:07:37.480 15.557 - 15.655: 98.5688% ( 1) 00:07:37.480 15.655 - 15.754: 98.6209% ( 2) 00:07:37.480 15.754 - 15.852: 98.6469% ( 1) 00:07:37.480 15.852 - 15.951: 98.6729% ( 1) 00:07:37.480 16.049 - 16.148: 98.6989% ( 1) 00:07:37.480 16.443 - 16.542: 98.7510% ( 2) 00:07:37.480 16.935 - 17.034: 98.7770% ( 1) 00:07:37.480 17.034 - 17.132: 98.8030% ( 1) 00:07:37.480 17.132 - 17.231: 98.8811% ( 3) 00:07:37.480 17.231 - 17.329: 98.9071% ( 1) 00:07:37.480 17.329 - 17.428: 98.9591% ( 2) 00:07:37.480 17.428 - 17.526: 98.9852% ( 1) 00:07:37.480 17.625 - 17.723: 99.0112% ( 1) 00:07:37.480 17.920 - 18.018: 99.0372% ( 1) 00:07:37.480 18.018 - 18.117: 99.0632% ( 1) 00:07:37.480 18.215 - 18.314: 99.1153% ( 2) 00:07:37.480 18.412 - 18.511: 99.1413% ( 1) 00:07:37.480 18.511 - 18.609: 99.1673% ( 1) 00:07:37.480 18.905 - 19.003: 99.1933% ( 1) 00:07:37.480 19.102 - 19.200: 99.2454% ( 2) 00:07:37.480 19.298 - 19.397: 99.2714% ( 1) 00:07:37.480 20.283 - 20.382: 99.2974% ( 1) 00:07:37.480 20.382 - 20.480: 99.3234% ( 1) 00:07:37.480 21.268 - 21.366: 99.3495% ( 1) 00:07:37.480 21.366 - 21.465: 99.4015% ( 2) 00:07:37.480 21.662 - 21.760: 99.4275% ( 1) 00:07:37.480 21.760 - 21.858: 99.4536% ( 1) 00:07:37.480 22.646 - 22.745: 99.4796% ( 1) 00:07:37.480 23.237 - 23.335: 99.5056% ( 1) 00:07:37.480 23.434 - 23.532: 99.5316% ( 1) 00:07:37.480 24.320 - 24.418: 99.5576% ( 1) 00:07:37.481 25.600 - 25.797: 99.5837% ( 1) 00:07:37.481 28.751 - 28.948: 99.6097% ( 1) 00:07:37.481 35.446 - 35.643: 99.6357% ( 1) 00:07:37.481 38.400 - 38.597: 99.6617% ( 1) 00:07:37.481 41.157 - 41.354: 99.6877% ( 1) 00:07:37.481 43.914 - 44.111: 99.7138% ( 1) 00:07:37.481 46.671 - 46.868: 99.7398% ( 1) 00:07:37.481 49.428 - 49.625: 99.7658% ( 1) 00:07:37.481 52.775 - 53.169: 99.7918% ( 1) 00:07:37.481 53.169 - 53.563: 99.8179% ( 1) 00:07:37.481 54.351 - 54.745: 99.8439% ( 1) 00:07:37.481 57.108 - 57.502: 99.8699% ( 1) 00:07:37.481 59.077 - 59.471: 99.8959% ( 1) 00:07:37.481 63.015 - 63.409: 99.9219% ( 1) 00:07:37.481 66.560 - 66.954: 99.9480% ( 1) 00:07:37.481 116.578 - 117.366: 
99.9740% ( 1) 00:07:37.481 349.735 - 351.311: 100.0000% ( 1) 00:07:37.481 00:07:37.481 ************************************ 00:07:37.481 END TEST nvme_overhead 00:07:37.481 ************************************ 00:07:37.481 00:07:37.481 real 0m1.224s 00:07:37.481 user 0m1.067s 00:07:37.481 sys 0m0.103s 00:07:37.481 17:24:10 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.481 17:24:10 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:37.481 17:24:10 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:37.481 17:24:10 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:37.481 17:24:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.481 17:24:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:37.481 ************************************ 00:07:37.481 START TEST nvme_arbitration 00:07:37.481 ************************************ 00:07:37.481 17:24:10 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:40.775 Initializing NVMe Controllers 00:07:40.775 Attached to 0000:00:10.0 00:07:40.775 Attached to 0000:00:11.0 00:07:40.775 Attached to 0000:00:13.0 00:07:40.775 Attached to 0000:00:12.0 00:07:40.775 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:07:40.775 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:07:40.775 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:07:40.775 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:40.776 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:40.776 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:40.776 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:40.776 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:40.776 Initialization complete. Launching workers. 
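Note: the arbitration example initialized above drives the randrw workload from the printed configuration (-q 64 -w randrw -M 50 -c 0xf) across four cores, submitting through urgent-priority queues per the thread start-up lines that follow; the per-core IO/s figures below show how those queues share the six namespaces. Rerunning it is just the command the harness named:

    # 3-second arbitration run, shared instance id 0, as in the log above
    /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0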
00:07:40.776 Starting thread on core 1 with urgent priority queue 00:07:40.776 Starting thread on core 2 with urgent priority queue 00:07:40.776 Starting thread on core 3 with urgent priority queue 00:07:40.776 Starting thread on core 0 with urgent priority queue 00:07:40.776 QEMU NVMe Ctrl (12340 ) core 0: 810.67 IO/s 123.36 secs/100000 ios 00:07:40.776 QEMU NVMe Ctrl (12342 ) core 0: 810.67 IO/s 123.36 secs/100000 ios 00:07:40.776 QEMU NVMe Ctrl (12341 ) core 1: 810.67 IO/s 123.36 secs/100000 ios 00:07:40.776 QEMU NVMe Ctrl (12342 ) core 1: 810.67 IO/s 123.36 secs/100000 ios 00:07:40.776 QEMU NVMe Ctrl (12343 ) core 2: 853.33 IO/s 117.19 secs/100000 ios 00:07:40.776 QEMU NVMe Ctrl (12342 ) core 3: 960.00 IO/s 104.17 secs/100000 ios 00:07:40.776 ======================================================== 00:07:40.776 00:07:40.776 ************************************ 00:07:40.776 END TEST nvme_arbitration 00:07:40.776 ************************************ 00:07:40.776 00:07:40.776 real 0m3.337s 00:07:40.776 user 0m9.265s 00:07:40.776 sys 0m0.122s 00:07:40.776 17:24:13 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.776 17:24:13 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:40.776 17:24:13 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:40.776 17:24:13 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:40.776 17:24:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.776 17:24:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.776 ************************************ 00:07:40.776 START TEST nvme_single_aen 00:07:40.776 ************************************ 00:07:40.776 17:24:13 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:41.035 Asynchronous Event Request test 00:07:41.035 Attached to 0000:00:10.0 00:07:41.035 Attached to 0000:00:11.0 00:07:41.035 Attached to 0000:00:13.0 00:07:41.035 Attached to 0000:00:12.0 00:07:41.035 Reset controller to setup AER completions for this process 00:07:41.035 Registering asynchronous event callbacks... 
00:07:41.035 Getting orig temperature thresholds of all controllers 00:07:41.035 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:41.035 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:41.035 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:41.035 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:41.035 Setting all controllers temperature threshold low to trigger AER 00:07:41.035 Waiting for all controllers temperature threshold to be set lower 00:07:41.035 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:41.035 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:41.035 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:41.035 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:41.035 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:41.035 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:41.035 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:41.035 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:41.035 Waiting for all controllers to trigger AER and reset threshold 00:07:41.035 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.035 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.035 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.035 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:41.035 Cleaning up... 00:07:41.035 ************************************ 00:07:41.035 END TEST nvme_single_aen 00:07:41.035 ************************************ 00:07:41.035 00:07:41.035 real 0m0.233s 00:07:41.035 user 0m0.081s 00:07:41.035 sys 0m0.098s 00:07:41.035 17:24:14 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.035 17:24:14 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:41.035 17:24:14 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:41.035 17:24:14 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.035 17:24:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.035 17:24:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.035 ************************************ 00:07:41.035 START TEST nvme_doorbell_aers 00:07:41.035 ************************************ 00:07:41.035 17:24:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:41.035 17:24:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:41.035 17:24:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:41.035 17:24:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:41.035 17:24:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:41.035 17:24:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:41.035 17:24:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:41.035 17:24:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:41.035 17:24:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:41.035 17:24:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
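Note: the doorbell test builds its device list straight from the generated SPDK config: gen_nvme.sh emits JSON and jq pulls each controller's PCI address out of it (the four addresses are echoed by the printf below). The same one-liner is useful anywhere a script needs the local NVMe bdfs:

    # Prints one PCI address per line, e.g. 0000:00:10.0 ... 0000:00:13.0
    /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'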
00:07:41.035 17:24:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:41.035 17:24:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:41.035 17:24:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:41.035 17:24:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:41.296 [2024-12-07 17:24:14.516624] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63193) is not found. Dropping the request. 00:07:51.301 Executing: test_write_invalid_db 00:07:51.301 Waiting for AER completion... 00:07:51.301 Failure: test_write_invalid_db 00:07:51.301 00:07:51.301 Executing: test_invalid_db_write_overflow_sq 00:07:51.301 Waiting for AER completion... 00:07:51.301 Failure: test_invalid_db_write_overflow_sq 00:07:51.301 00:07:51.301 Executing: test_invalid_db_write_overflow_cq 00:07:51.301 Waiting for AER completion... 00:07:51.301 Failure: test_invalid_db_write_overflow_cq 00:07:51.301 00:07:51.301 17:24:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:51.301 17:24:24 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:51.301 [2024-12-07 17:24:24.552427] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63193) is not found. Dropping the request. 00:08:01.273 Executing: test_write_invalid_db 00:08:01.273 Waiting for AER completion... 00:08:01.273 Failure: test_write_invalid_db 00:08:01.273 00:08:01.273 Executing: test_invalid_db_write_overflow_sq 00:08:01.273 Waiting for AER completion... 00:08:01.273 Failure: test_invalid_db_write_overflow_sq 00:08:01.273 00:08:01.273 Executing: test_invalid_db_write_overflow_cq 00:08:01.273 Waiting for AER completion... 00:08:01.273 Failure: test_invalid_db_write_overflow_cq 00:08:01.273 00:08:01.273 17:24:34 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:01.273 17:24:34 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:01.273 [2024-12-07 17:24:34.585546] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63193) is not found. Dropping the request. 00:08:11.233 Executing: test_write_invalid_db 00:08:11.233 Waiting for AER completion... 00:08:11.233 Failure: test_write_invalid_db 00:08:11.233 00:08:11.233 Executing: test_invalid_db_write_overflow_sq 00:08:11.233 Waiting for AER completion... 00:08:11.233 Failure: test_invalid_db_write_overflow_sq 00:08:11.233 00:08:11.233 Executing: test_invalid_db_write_overflow_cq 00:08:11.233 Waiting for AER completion... 
00:08:11.233 Failure: test_invalid_db_write_overflow_cq 00:08:11.233 00:08:11.233 17:24:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:11.233 17:24:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:11.490 [2024-12-07 17:24:44.620312] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63193) is not found. Dropping the request. 00:08:21.457 Executing: test_write_invalid_db 00:08:21.457 Waiting for AER completion... 00:08:21.457 Failure: test_write_invalid_db 00:08:21.457 00:08:21.457 Executing: test_invalid_db_write_overflow_sq 00:08:21.457 Waiting for AER completion... 00:08:21.457 Failure: test_invalid_db_write_overflow_sq 00:08:21.457 00:08:21.457 Executing: test_invalid_db_write_overflow_cq 00:08:21.457 Waiting for AER completion... 00:08:21.457 Failure: test_invalid_db_write_overflow_cq 00:08:21.457 00:08:21.457 00:08:21.457 real 0m40.194s 00:08:21.457 user 0m34.081s 00:08:21.457 sys 0m5.712s 00:08:21.457 17:24:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.457 17:24:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:21.457 ************************************ 00:08:21.457 END TEST nvme_doorbell_aers 00:08:21.457 ************************************ 00:08:21.457 17:24:54 nvme -- nvme/nvme.sh@97 -- # uname 00:08:21.457 17:24:54 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:21.457 17:24:54 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:21.457 17:24:54 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:21.457 17:24:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.457 17:24:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:21.457 ************************************ 00:08:21.457 START TEST nvme_multi_aen 00:08:21.457 ************************************ 00:08:21.457 17:24:54 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:21.457 [2024-12-07 17:24:54.672659] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63193) is not found. Dropping the request. 00:08:21.457 [2024-12-07 17:24:54.672828] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63193) is not found. Dropping the request. 00:08:21.457 [2024-12-07 17:24:54.672841] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63193) is not found. Dropping the request. 00:08:21.457 [2024-12-07 17:24:54.674014] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63193) is not found. Dropping the request. 00:08:21.457 [2024-12-07 17:24:54.674042] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63193) is not found. Dropping the request. 00:08:21.457 [2024-12-07 17:24:54.674050] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63193) is not found. Dropping the request. 00:08:21.457 [2024-12-07 17:24:54.674971] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63193) is not found. 
Dropping the request. 00:08:21.457 [2024-12-07 17:24:54.675003] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63193) is not found. Dropping the request. 00:08:21.457 [2024-12-07 17:24:54.675011] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63193) is not found. Dropping the request. 00:08:21.457 [2024-12-07 17:24:54.675877] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63193) is not found. Dropping the request. 00:08:21.457 [2024-12-07 17:24:54.675897] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63193) is not found. Dropping the request. 00:08:21.457 [2024-12-07 17:24:54.675904] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63193) is not found. Dropping the request. 00:08:21.457 Child process pid: 63714 00:08:21.715 [Child] Asynchronous Event Request test 00:08:21.715 [Child] Attached to 0000:00:10.0 00:08:21.715 [Child] Attached to 0000:00:11.0 00:08:21.716 [Child] Attached to 0000:00:13.0 00:08:21.716 [Child] Attached to 0000:00:12.0 00:08:21.716 [Child] Registering asynchronous event callbacks... 00:08:21.716 [Child] Getting orig temperature thresholds of all controllers 00:08:21.716 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:21.716 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:21.716 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:21.716 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:21.716 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:21.716 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:21.716 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:21.716 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:21.716 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:21.716 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.716 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.716 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.716 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.716 [Child] Cleaning up... 00:08:21.716 Asynchronous Event Request test 00:08:21.716 Attached to 0000:00:10.0 00:08:21.716 Attached to 0000:00:11.0 00:08:21.716 Attached to 0000:00:13.0 00:08:21.716 Attached to 0000:00:12.0 00:08:21.716 Reset controller to setup AER completions for this process 00:08:21.716 Registering asynchronous event callbacks... 
00:08:21.716 Getting orig temperature thresholds of all controllers 00:08:21.716 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:21.716 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:21.716 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:21.716 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:21.716 Setting all controllers temperature threshold low to trigger AER 00:08:21.716 Waiting for all controllers temperature threshold to be set lower 00:08:21.716 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:21.716 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:21.716 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:21.716 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:21.716 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:21.716 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:21.716 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:21.716 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:21.716 Waiting for all controllers to trigger AER and reset threshold 00:08:21.716 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.716 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.716 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.716 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.716 Cleaning up... 00:08:21.716 00:08:21.716 real 0m0.436s 00:08:21.716 user 0m0.137s 00:08:21.716 sys 0m0.185s 00:08:21.716 17:24:54 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.716 17:24:54 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:21.716 ************************************ 00:08:21.716 END TEST nvme_multi_aen 00:08:21.716 ************************************ 00:08:21.716 17:24:54 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:21.716 17:24:54 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:21.716 17:24:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.716 17:24:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:21.716 ************************************ 00:08:21.716 START TEST nvme_startup 00:08:21.716 ************************************ 00:08:21.716 17:24:54 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:21.987 Initializing NVMe Controllers 00:08:21.987 Attached to 0000:00:10.0 00:08:21.987 Attached to 0000:00:11.0 00:08:21.987 Attached to 0000:00:13.0 00:08:21.987 Attached to 0000:00:12.0 00:08:21.987 Initialization complete. 00:08:21.987 Time used:141539.734 (us). 
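Note: nvme_startup above is a pure initialization-latency check: it attaches to all four controllers and reports the time used, 141539.734 us here. The -t argument is presumably the allowed startup time in the same microsecond units, i.e. this run had to come up within one second:

    /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000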
00:08:21.987 00:08:21.987 real 0m0.203s 00:08:21.987 user 0m0.077s 00:08:21.987 sys 0m0.084s 00:08:21.987 17:24:55 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.987 17:24:55 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:21.987 ************************************ 00:08:21.987 END TEST nvme_startup 00:08:21.987 ************************************ 00:08:21.987 17:24:55 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:21.987 17:24:55 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.987 17:24:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.987 17:24:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:21.987 ************************************ 00:08:21.987 START TEST nvme_multi_secondary 00:08:21.987 ************************************ 00:08:21.987 17:24:55 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:21.987 17:24:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63770 00:08:21.987 17:24:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:21.987 17:24:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63771 00:08:21.987 17:24:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:21.987 17:24:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:25.342 Initializing NVMe Controllers 00:08:25.342 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:25.342 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:25.342 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:25.342 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:25.342 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:25.342 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:25.342 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:25.342 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:25.342 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:25.342 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:25.342 Initialization complete. Launching workers. 
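Note: the multi_secondary layout above is one primary perf process on core mask 0x1 for 5 seconds plus two 3-second secondaries on masks 0x2 and 0x4, all joined to the same -i 0 shared memory instance; nvme.sh backgrounds them (the pid0=/pid1= xtrace lines) and reaps them with wait further down. A sketch of the same pattern outside the harness (the backgrounding and wait are implied by the trace, not shown verbatim in it):

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # primary
    pid0=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary
    pid1=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4     # secondary
    wait "$pid0" "$pid1"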
00:08:25.342 ======================================================== 00:08:25.342 Latency(us) 00:08:25.342 Device Information : IOPS MiB/s Average min max 00:08:25.342 PCIE (0000:00:10.0) NSID 1 from core 2: 2021.00 7.89 7914.84 1431.96 33293.48 00:08:25.342 PCIE (0000:00:11.0) NSID 1 from core 2: 2021.00 7.89 7916.66 1443.00 28947.37 00:08:25.342 PCIE (0000:00:13.0) NSID 1 from core 2: 2021.00 7.89 7916.60 1305.89 28110.75 00:08:25.342 PCIE (0000:00:12.0) NSID 1 from core 2: 2021.00 7.89 7917.07 1444.67 26504.92 00:08:25.342 PCIE (0000:00:12.0) NSID 2 from core 2: 2021.00 7.89 7917.39 1443.01 27451.53 00:08:25.342 PCIE (0000:00:12.0) NSID 3 from core 2: 2021.00 7.89 7917.84 1311.14 29106.94 00:08:25.342 ======================================================== 00:08:25.342 Total : 12125.97 47.37 7916.73 1305.89 33293.48 00:08:25.342 00:08:25.342 Initializing NVMe Controllers 00:08:25.342 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:25.342 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:25.342 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:25.342 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:25.342 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:25.342 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:25.342 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:25.342 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:25.342 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:25.342 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:25.342 Initialization complete. Launching workers. 00:08:25.342 ======================================================== 00:08:25.342 Latency(us) 00:08:25.342 Device Information : IOPS MiB/s Average min max 00:08:25.342 PCIE (0000:00:10.0) NSID 1 from core 1: 4904.81 19.16 3260.46 792.93 12194.94 00:08:25.342 PCIE (0000:00:11.0) NSID 1 from core 1: 4904.81 19.16 3261.58 824.19 11987.49 00:08:25.342 PCIE (0000:00:13.0) NSID 1 from core 1: 4904.81 19.16 3261.49 819.98 12582.42 00:08:25.342 PCIE (0000:00:12.0) NSID 1 from core 1: 4904.81 19.16 3261.47 826.36 11064.77 00:08:25.342 PCIE (0000:00:12.0) NSID 2 from core 1: 4904.81 19.16 3261.41 808.85 13065.86 00:08:25.342 PCIE (0000:00:12.0) NSID 3 from core 1: 4904.81 19.16 3261.33 819.30 12039.16 00:08:25.342 ======================================================== 00:08:25.342 Total : 29428.84 114.96 3261.29 792.93 13065.86 00:08:25.342 00:08:25.342 17:24:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63770 00:08:27.236 Initializing NVMe Controllers 00:08:27.236 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:27.236 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:27.236 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:27.236 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:27.236 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:27.236 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:27.236 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:27.236 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:27.236 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:27.236 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:27.236 Initialization complete. Launching workers. 
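The two latency tables above were produced concurrently: nvme_multi_secondary starts one spdk_nvme_perf instance as the DPDK primary process and further instances as secondaries, all sharing the same controllers by passing the same shared-memory id (-i 0) while pinning to different core masks (-c 0x1/0x2/0x4). A hedged sketch of that launch pattern, with the binary path and flags copied from the trace; the one-second settle delay is an assumption, not something shown in the log:

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

    # Primary: shm id 0, core mask 0x1, longest runtime so it outlives the rest.
    $PERF -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
    pid0=$!
    sleep 1   # assumed: give the primary time to set up the shared state

    # Secondaries join the same -i 0 instance on their own cores.
    $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
    pid1=$!
    $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4   # foreground, as in the trace

    wait $pid0   # the 5 s primary drains last
    wait $pid1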
00:08:27.236 ======================================================== 00:08:27.236 Latency(us) 00:08:27.236 Device Information : IOPS MiB/s Average min max 00:08:27.236 PCIE (0000:00:10.0) NSID 1 from core 0: 9321.05 36.41 1715.27 690.47 12154.39 00:08:27.236 PCIE (0000:00:11.0) NSID 1 from core 0: 9321.05 36.41 1716.14 697.81 12658.06 00:08:27.236 PCIE (0000:00:13.0) NSID 1 from core 0: 9321.05 36.41 1716.11 698.41 13140.08 00:08:27.236 PCIE (0000:00:12.0) NSID 1 from core 0: 9321.05 36.41 1716.08 701.03 12791.50 00:08:27.236 PCIE (0000:00:12.0) NSID 2 from core 0: 9321.05 36.41 1716.06 655.10 13794.14 00:08:27.236 PCIE (0000:00:12.0) NSID 3 from core 0: 9321.05 36.41 1716.04 628.00 11685.30 00:08:27.236 ======================================================== 00:08:27.237 Total : 55926.30 218.46 1715.95 628.00 13794.14 00:08:27.237 00:08:27.237 17:25:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63771 00:08:27.237 17:25:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63840 00:08:27.237 17:25:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:27.237 17:25:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63841 00:08:27.237 17:25:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:27.237 17:25:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:30.514 Initializing NVMe Controllers 00:08:30.514 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:30.514 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:30.514 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:30.514 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:30.514 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:30.514 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:30.514 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:30.514 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:30.514 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:30.514 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:30.514 Initialization complete. Launching workers. 
00:08:30.514 ======================================================== 00:08:30.514 Latency(us) 00:08:30.514 Device Information : IOPS MiB/s Average min max 00:08:30.514 PCIE (0000:00:10.0) NSID 1 from core 1: 7428.05 29.02 2152.70 735.78 6796.15 00:08:30.514 PCIE (0000:00:11.0) NSID 1 from core 1: 7428.05 29.02 2153.72 757.57 6857.32 00:08:30.514 PCIE (0000:00:13.0) NSID 1 from core 1: 7428.05 29.02 2153.73 760.77 7405.21 00:08:30.514 PCIE (0000:00:12.0) NSID 1 from core 1: 7428.05 29.02 2153.68 755.12 7380.10 00:08:30.514 PCIE (0000:00:12.0) NSID 2 from core 1: 7428.05 29.02 2153.64 757.65 6485.79 00:08:30.514 PCIE (0000:00:12.0) NSID 3 from core 1: 7428.05 29.02 2153.61 755.23 7030.56 00:08:30.514 ======================================================== 00:08:30.514 Total : 44568.27 174.09 2153.52 735.78 7405.21 00:08:30.514 00:08:30.514 Initializing NVMe Controllers 00:08:30.514 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:30.514 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:30.514 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:30.514 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:30.514 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:30.514 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:30.514 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:30.514 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:30.514 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:30.514 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:30.514 Initialization complete. Launching workers. 00:08:30.514 ======================================================== 00:08:30.514 Latency(us) 00:08:30.514 Device Information : IOPS MiB/s Average min max 00:08:30.514 PCIE (0000:00:10.0) NSID 1 from core 0: 7732.17 30.20 2067.97 708.81 6619.13 00:08:30.514 PCIE (0000:00:11.0) NSID 1 from core 0: 7732.17 30.20 2068.99 729.08 6583.99 00:08:30.514 PCIE (0000:00:13.0) NSID 1 from core 0: 7732.17 30.20 2069.07 721.41 6593.34 00:08:30.514 PCIE (0000:00:12.0) NSID 1 from core 0: 7732.17 30.20 2069.14 739.87 6782.26 00:08:30.514 PCIE (0000:00:12.0) NSID 2 from core 0: 7732.17 30.20 2069.21 735.97 6684.88 00:08:30.514 PCIE (0000:00:12.0) NSID 3 from core 0: 7732.17 30.20 2069.30 735.42 6616.16 00:08:30.514 ======================================================== 00:08:30.514 Total : 46393.01 181.22 2068.95 708.81 6782.26 00:08:30.514 00:08:33.061 Initializing NVMe Controllers 00:08:33.061 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:33.061 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:33.061 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:33.061 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:33.061 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:33.061 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:33.061 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:33.061 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:33.061 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:33.061 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:33.061 Initialization complete. Launching workers. 
00:08:33.061 ======================================================== 00:08:33.061 Latency(us) 00:08:33.061 Device Information : IOPS MiB/s Average min max 00:08:33.061 PCIE (0000:00:10.0) NSID 1 from core 2: 4891.67 19.11 3268.80 737.85 16519.87 00:08:33.061 PCIE (0000:00:11.0) NSID 1 from core 2: 4891.67 19.11 3270.45 705.15 12733.18 00:08:33.061 PCIE (0000:00:13.0) NSID 1 from core 2: 4891.67 19.11 3270.40 759.13 14190.58 00:08:33.061 PCIE (0000:00:12.0) NSID 1 from core 2: 4891.67 19.11 3270.17 750.75 12673.44 00:08:33.061 PCIE (0000:00:12.0) NSID 2 from core 2: 4891.67 19.11 3269.94 762.23 12747.96 00:08:33.061 PCIE (0000:00:12.0) NSID 3 from core 2: 4891.67 19.11 3269.22 750.30 16376.92 00:08:33.061 ======================================================== 00:08:33.061 Total : 29350.01 114.65 3269.83 705.15 16519.87 00:08:33.061 00:08:33.061 ************************************ 00:08:33.061 END TEST nvme_multi_secondary 00:08:33.061 ************************************ 00:08:33.061 17:25:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63840 00:08:33.061 17:25:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63841 00:08:33.061 00:08:33.061 real 0m10.720s 00:08:33.061 user 0m18.340s 00:08:33.061 sys 0m0.697s 00:08:33.061 17:25:05 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.061 17:25:05 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:33.061 17:25:05 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:33.061 17:25:05 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:33.061 17:25:05 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62797 ]] 00:08:33.061 17:25:05 nvme -- common/autotest_common.sh@1094 -- # kill 62797 00:08:33.061 17:25:05 nvme -- common/autotest_common.sh@1095 -- # wait 62797 00:08:33.061 [2024-12-07 17:25:05.956380] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63713) is not found. Dropping the request. 00:08:33.061 [2024-12-07 17:25:05.956454] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63713) is not found. Dropping the request. 00:08:33.061 [2024-12-07 17:25:05.956484] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63713) is not found. Dropping the request. 00:08:33.061 [2024-12-07 17:25:05.956503] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63713) is not found. Dropping the request. 00:08:33.061 [2024-12-07 17:25:05.959473] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63713) is not found. Dropping the request. 00:08:33.061 [2024-12-07 17:25:05.959515] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63713) is not found. Dropping the request. 00:08:33.061 [2024-12-07 17:25:05.959527] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63713) is not found. Dropping the request. 00:08:33.061 [2024-12-07 17:25:05.959538] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63713) is not found. Dropping the request. 00:08:33.061 [2024-12-07 17:25:05.961118] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63713) is not found. Dropping the request. 
00:08:33.061 [2024-12-07 17:25:05.961152] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63713) is not found. Dropping the request. 00:08:33.061 [2024-12-07 17:25:05.961163] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63713) is not found. Dropping the request. 00:08:33.061 [2024-12-07 17:25:05.961174] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63713) is not found. Dropping the request. 00:08:33.061 [2024-12-07 17:25:05.962730] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63713) is not found. Dropping the request. 00:08:33.061 [2024-12-07 17:25:05.962768] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63713) is not found. Dropping the request. 00:08:33.061 [2024-12-07 17:25:05.962779] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63713) is not found. Dropping the request. 00:08:33.061 [2024-12-07 17:25:05.962790] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63713) is not found. Dropping the request. 00:08:33.061 17:25:06 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:33.061 17:25:06 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:33.061 17:25:06 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:33.061 17:25:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.061 17:25:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.061 17:25:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:33.061 ************************************ 00:08:33.061 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:33.061 ************************************ 00:08:33.061 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:33.061 * Looking for test storage... 
00:08:33.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:33.061 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:33.061 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:33.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.062 --rc genhtml_branch_coverage=1 00:08:33.062 --rc genhtml_function_coverage=1 00:08:33.062 --rc genhtml_legend=1 00:08:33.062 --rc geninfo_all_blocks=1 00:08:33.062 --rc geninfo_unexecuted_blocks=1 00:08:33.062 00:08:33.062 ' 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:33.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.062 --rc genhtml_branch_coverage=1 00:08:33.062 --rc genhtml_function_coverage=1 00:08:33.062 --rc genhtml_legend=1 00:08:33.062 --rc geninfo_all_blocks=1 00:08:33.062 --rc geninfo_unexecuted_blocks=1 00:08:33.062 00:08:33.062 ' 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:33.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.062 --rc genhtml_branch_coverage=1 00:08:33.062 --rc genhtml_function_coverage=1 00:08:33.062 --rc genhtml_legend=1 00:08:33.062 --rc geninfo_all_blocks=1 00:08:33.062 --rc geninfo_unexecuted_blocks=1 00:08:33.062 00:08:33.062 ' 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:33.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.062 --rc genhtml_branch_coverage=1 00:08:33.062 --rc genhtml_function_coverage=1 00:08:33.062 --rc genhtml_legend=1 00:08:33.062 --rc geninfo_all_blocks=1 00:08:33.062 --rc geninfo_unexecuted_blocks=1 00:08:33.062 00:08:33.062 ' 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:33.062 
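The scripts/common.sh walk traced above ("lt 1.15 2" for the lcov version gate) splits each version string on '.', '-' and ':' into arrays and compares the fields numerically, treating missing fields as zero. A condensed, hedged re-rendering of that logic; the real helper routes through cmp_versions and a decimal sanitizer rather than a single function:

    lt() {   # usage: lt 1.15 2 -> exit 0 when $1 < $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    lt 1.15 2 && echo "lcov predates 2.x: enable the branch-coverage flags"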
17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:33.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64006 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64006 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64006 ']' 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
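get_first_nvme_bdf, traced above, resolves the PCI address the rest of the test binds to: gen_nvme.sh emits a JSON bdev config for every NVMe controller in the system, and jq pulls out each traddr. The same pipeline, standalone, with the repo path taken from the log:

    rootdir=/home/vagrant/spdk_repo/spdk

    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }

    printf '%s\n' "${bdfs[@]}"    # 0000:00:10.0 ... 0000:00:13.0 in this run
    echo "first bdf: ${bdfs[0]}"  # the controller the reset test attaches to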
00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.062 17:25:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:33.062 [2024-12-07 17:25:06.372550] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:08:33.062 [2024-12-07 17:25:06.372663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64006 ] 00:08:33.321 [2024-12-07 17:25:06.541108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:33.321 [2024-12-07 17:25:06.638961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.321 [2024-12-07 17:25:06.639169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.321 [2024-12-07 17:25:06.639344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.321 [2024-12-07 17:25:06.639366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.887 17:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.887 17:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:33.887 17:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:33.887 17:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.887 17:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:34.146 nvme0n1 00:08:34.146 17:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.146 17:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:34.146 17:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_jtoK2.txt 00:08:34.146 17:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:34.146 17:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.146 17:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:34.146 true 00:08:34.146 17:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.146 17:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:34.146 17:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733592307 00:08:34.146 17:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64029 00:08:34.146 17:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:34.146 17:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:34.146 17:25:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:36.048 [2024-12-07 17:25:09.324384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:36.048 [2024-12-07 17:25:09.324646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:36.048 [2024-12-07 17:25:09.324669] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:36.048 [2024-12-07 17:25:09.324682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:36.048 [2024-12-07 17:25:09.326513] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64029 00:08:36.048 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64029 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64029 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_jtoK2.txt 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_jtoK2.txt 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64006 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64006 ']' 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64006 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.048 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64006 00:08:36.307 killing process with pid 64006 00:08:36.307 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:36.307 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:36.307 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64006' 00:08:36.307 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64006 00:08:36.307 17:25:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64006 00:08:37.244 17:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:37.244 17:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:37.244 ************************************ 00:08:37.244 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:37.244 ************************************ 00:08:37.244 00:08:37.244 real 0m4.496s 
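Stripped of the xtrace noise, the reset-stuck-admin-command flow above is a short RPC script: attach the controller as bdev nvme0, arm a one-shot injection that holds admin opcode 10 (Get Features) without submitting it, fire a Get Features in the background so it gets stuck, reset the controller to force the pending command to complete, then decode the saved completion. A hedged condensation follows; the rpc.py path, flags, and command payload are copied verbatim from the trace, while the assembly (redirection, backgrounding) is inferred from the surrounding trace lines:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    tmp_file=$(mktemp /tmp/err_inj_XXXXX.txt)

    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0

    # One-shot: admin opcode 10 (Get Features) is held for up to 15 s
    # (--do_not_submit) and completed with sct=0 / sc=1, i.e. Invalid Opcode.
    $RPC bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

    # Get Features / NUMBER OF QUEUES (cdw10=7), base64-encoded 64-byte command.
    $RPC bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== \
        > "$tmp_file" &
    get_feat_pid=$!

    sleep 2
    $RPC bdev_nvme_reset_controller nvme0   # completes the stuck command manually
    wait $get_feat_pid

    # .cpl holds the base64 completion; the test pulls SC from bits 1-8 and
    # SCT from bits 9-11 of the status word and checks them against 1 / 0.
    jq -r .cpl "$tmp_file"
    $RPC bdev_nvme_detach_controller nvme0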
00:08:37.244 user 0m15.959s 00:08:37.244 sys 0m0.472s 00:08:37.244 17:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.244 17:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:37.502 17:25:10 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:37.502 17:25:10 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:37.502 17:25:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.502 17:25:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.502 17:25:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:37.502 ************************************ 00:08:37.502 START TEST nvme_fio 00:08:37.502 ************************************ 00:08:37.502 17:25:10 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:37.502 17:25:10 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:37.502 17:25:10 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:37.502 17:25:10 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:37.502 17:25:10 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:37.502 17:25:10 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:37.502 17:25:10 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:37.502 17:25:10 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:37.502 17:25:10 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:37.502 17:25:10 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:37.502 17:25:10 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:37.502 17:25:10 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:37.502 17:25:10 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:37.502 17:25:10 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:37.502 17:25:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:37.502 17:25:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:37.761 17:25:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:37.761 17:25:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:38.019 17:25:11 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:38.019 17:25:11 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:38.019 17:25:11 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:38.019 17:25:11 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:38.019 17:25:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:38.019 17:25:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:38.019 17:25:11 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:38.019 17:25:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:38.019 17:25:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:38.020 17:25:11 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:38.020 17:25:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:38.020 17:25:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:38.020 17:25:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:38.020 17:25:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:38.020 17:25:11 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:38.020 17:25:11 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:38.020 17:25:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:38.020 17:25:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:38.020 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:38.020 fio-3.35 00:08:38.020 Starting 1 thread 00:08:44.604 00:08:44.604 test: (groupid=0, jobs=1): err= 0: pid=64164: Sat Dec 7 17:25:16 2024 00:08:44.604 read: IOPS=22.4k, BW=87.5MiB/s (91.7MB/s)(175MiB/2001msec) 00:08:44.604 slat (nsec): min=3846, max=63325, avg=5543.55, stdev=2411.42 00:08:44.604 clat (usec): min=264, max=8113, avg=2855.27, stdev=889.05 00:08:44.604 lat (usec): min=269, max=8118, avg=2860.82, stdev=890.54 00:08:44.604 clat percentiles (usec): 00:08:44.604 | 1.00th=[ 1729], 5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2442], 00:08:44.604 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2573], 60.00th=[ 2606], 00:08:44.604 | 70.00th=[ 2671], 80.00th=[ 2868], 90.00th=[ 3687], 95.00th=[ 5276], 00:08:44.604 | 99.00th=[ 6325], 99.50th=[ 6587], 99.90th=[ 7504], 99.95th=[ 7570], 00:08:44.604 | 99.99th=[ 7898] 00:08:44.604 bw ( KiB/s): min=82144, max=95008, per=99.58%, avg=89185.00, stdev=6517.92, samples=3 00:08:44.604 iops : min=20536, max=23752, avg=22296.00, stdev=1629.41, samples=3 00:08:44.604 write: IOPS=22.2k, BW=86.9MiB/s (91.1MB/s)(174MiB/2001msec); 0 zone resets 00:08:44.604 slat (nsec): min=4029, max=62985, avg=6001.38, stdev=2583.63 00:08:44.604 clat (usec): min=476, max=8392, avg=2855.65, stdev=888.78 00:08:44.604 lat (usec): min=482, max=8408, avg=2861.65, stdev=890.34 00:08:44.604 clat percentiles (usec): 00:08:44.604 | 1.00th=[ 1729], 5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2442], 00:08:44.605 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2573], 60.00th=[ 2606], 00:08:44.605 | 70.00th=[ 2671], 80.00th=[ 2868], 90.00th=[ 3687], 95.00th=[ 5276], 00:08:44.605 | 99.00th=[ 6325], 99.50th=[ 6652], 99.90th=[ 7504], 99.95th=[ 7635], 00:08:44.605 | 99.99th=[ 8029] 00:08:44.605 bw ( KiB/s): min=82368, max=95808, per=100.00%, avg=89358.67, stdev=6736.33, samples=3 00:08:44.605 iops : min=20592, max=23952, avg=22339.67, stdev=1684.08, samples=3 00:08:44.605 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.03% 00:08:44.605 lat (msec) : 2=1.77%, 4=89.57%, 10=8.61% 00:08:44.605 cpu : usr=99.25%, sys=0.05%, ctx=3, majf=0, minf=607 
00:08:44.605 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:44.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:44.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:44.605 issued rwts: total=44801,44499,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:44.605 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:44.605 00:08:44.605 Run status group 0 (all jobs): 00:08:44.605 READ: bw=87.5MiB/s (91.7MB/s), 87.5MiB/s-87.5MiB/s (91.7MB/s-91.7MB/s), io=175MiB (184MB), run=2001-2001msec 00:08:44.605 WRITE: bw=86.9MiB/s (91.1MB/s), 86.9MiB/s-86.9MiB/s (91.1MB/s-91.1MB/s), io=174MiB (182MB), run=2001-2001msec 00:08:44.605 ----------------------------------------------------- 00:08:44.605 Suppressions used: 00:08:44.605 count bytes template 00:08:44.605 1 32 /usr/src/fio/parse.c 00:08:44.605 1 8 libtcmalloc_minimal.so 00:08:44.605 ----------------------------------------------------- 00:08:44.605 00:08:44.605 17:25:17 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:44.605 17:25:17 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:44.605 17:25:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:44.605 17:25:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:44.605 17:25:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:44.605 17:25:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:44.605 17:25:17 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:44.605 17:25:17 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:44.605 17:25:17 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:44.605 17:25:17 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:44.605 17:25:17 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:44.605 17:25:17 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:44.605 17:25:17 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:44.605 17:25:17 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:44.605 17:25:17 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:44.605 17:25:17 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:44.605 17:25:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:44.605 17:25:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:44.605 17:25:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:44.605 17:25:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:44.605 17:25:17 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:44.605 17:25:17 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:44.605 17:25:17 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:44.605 17:25:17 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:44.605 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:44.605 fio-3.35 00:08:44.605 Starting 1 thread 00:08:51.161 00:08:51.161 test: (groupid=0, jobs=1): err= 0: pid=64226: Sat Dec 7 17:25:23 2024 00:08:51.161 read: IOPS=21.8k, BW=85.2MiB/s (89.4MB/s)(171MiB/2001msec) 00:08:51.161 slat (nsec): min=3849, max=77550, avg=5733.08, stdev=2375.84 00:08:51.161 clat (usec): min=227, max=9016, avg=2923.17, stdev=879.29 00:08:51.161 lat (usec): min=232, max=9021, avg=2928.90, stdev=880.64 00:08:51.161 clat percentiles (usec): 00:08:51.161 | 1.00th=[ 2147], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2540], 00:08:51.161 | 30.00th=[ 2573], 40.00th=[ 2606], 50.00th=[ 2606], 60.00th=[ 2638], 00:08:51.161 | 70.00th=[ 2704], 80.00th=[ 2966], 90.00th=[ 3720], 95.00th=[ 5276], 00:08:51.161 | 99.00th=[ 6390], 99.50th=[ 6849], 99.90th=[ 7570], 99.95th=[ 8094], 00:08:51.161 | 99.99th=[ 8455] 00:08:51.161 bw ( KiB/s): min=84152, max=89208, per=98.48%, avg=85968.00, stdev=2812.76, samples=3 00:08:51.161 iops : min=21040, max=22300, avg=21492.00, stdev=701.39, samples=3 00:08:51.161 write: IOPS=21.7k, BW=84.7MiB/s (88.8MB/s)(169MiB/2001msec); 0 zone resets 00:08:51.161 slat (nsec): min=4177, max=65594, avg=6250.37, stdev=2428.68 00:08:51.161 clat (usec): min=209, max=8984, avg=2935.14, stdev=897.12 00:08:51.161 lat (usec): min=215, max=8990, avg=2941.39, stdev=898.53 00:08:51.161 clat percentiles (usec): 00:08:51.161 | 1.00th=[ 2114], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2540], 00:08:51.161 | 30.00th=[ 2573], 40.00th=[ 2606], 50.00th=[ 2606], 60.00th=[ 2638], 00:08:51.161 | 70.00th=[ 2704], 80.00th=[ 2966], 90.00th=[ 3785], 95.00th=[ 5342], 00:08:51.161 | 99.00th=[ 6456], 99.50th=[ 6915], 99.90th=[ 7570], 99.95th=[ 8094], 00:08:51.161 | 99.99th=[ 8586] 00:08:51.161 bw ( KiB/s): min=84080, max=88856, per=99.36%, avg=86130.67, stdev=2458.44, samples=3 00:08:51.161 iops : min=21020, max=22214, avg=21532.67, stdev=614.61, samples=3 00:08:51.161 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:08:51.161 lat (msec) : 2=0.70%, 4=90.39%, 10=8.85% 00:08:51.161 cpu : usr=99.20%, sys=0.00%, ctx=5, majf=0, minf=607 00:08:51.162 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:51.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:51.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:51.162 issued rwts: total=43670,43365,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:51.162 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:51.162 00:08:51.162 Run status group 0 (all jobs): 00:08:51.162 READ: bw=85.2MiB/s (89.4MB/s), 85.2MiB/s-85.2MiB/s (89.4MB/s-89.4MB/s), io=171MiB (179MB), run=2001-2001msec 00:08:51.162 WRITE: bw=84.7MiB/s (88.8MB/s), 84.7MiB/s-84.7MiB/s (88.8MB/s-88.8MB/s), io=169MiB (178MB), run=2001-2001msec 00:08:51.162 ----------------------------------------------------- 00:08:51.162 Suppressions used: 00:08:51.162 count bytes template 00:08:51.162 1 32 /usr/src/fio/parse.c 00:08:51.162 1 8 libtcmalloc_minimal.so 00:08:51.162 ----------------------------------------------------- 00:08:51.162 00:08:51.162 
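Each per-device pass in this nvme_fio run follows the same traced recipe: probe the controller with spdk_nvme_identify (a namespace must exist, and 'Extended Data LBA' would change the block size; every device here settles on bs=4096), locate the ASan runtime by grepping the fio plugin's ldd output, then LD_PRELOAD both into fio. A hedged boil-down, with paths copied from the log; note the SPDK engine's filename syntax replaces the colons of a PCI address with dots:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    config=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

    # ASan-built plugins need libasan loaded first; the trace resolves it
    # from the plugin's own dependency list (third ldd field is the path).
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

    # 0000:00:11.0 becomes traddr=0000.00.11.0 inside the fio filename.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$config" \
        '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096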
17:25:23 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:51.162 17:25:23 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:51.162 17:25:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:51.162 17:25:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:51.162 17:25:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:51.162 17:25:23 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:51.162 17:25:24 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:51.162 17:25:24 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:51.162 17:25:24 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:51.162 17:25:24 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:51.162 17:25:24 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:51.162 17:25:24 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:51.162 17:25:24 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:51.162 17:25:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:51.162 17:25:24 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:51.162 17:25:24 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:51.162 17:25:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:51.162 17:25:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:51.162 17:25:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:51.162 17:25:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:51.162 17:25:24 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:51.162 17:25:24 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:51.162 17:25:24 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:51.162 17:25:24 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:51.162 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:51.162 fio-3.35 00:08:51.162 Starting 1 thread 00:08:57.717 00:08:57.717 test: (groupid=0, jobs=1): err= 0: pid=64281: Sat Dec 7 17:25:30 2024 00:08:57.717 read: IOPS=22.7k, BW=88.5MiB/s (92.8MB/s)(177MiB/2001msec) 00:08:57.717 slat (nsec): min=3427, max=93288, avg=5451.34, stdev=2517.20 00:08:57.717 clat (usec): min=454, max=9061, avg=2820.50, stdev=913.57 00:08:57.717 lat (usec): min=464, max=9070, avg=2825.95, stdev=915.20 00:08:57.717 clat percentiles (usec): 00:08:57.717 | 1.00th=[ 2057], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2376], 00:08:57.717 | 30.00th=[ 2442], 
40.00th=[ 2474], 50.00th=[ 2540], 60.00th=[ 2573], 00:08:57.717 | 70.00th=[ 2638], 80.00th=[ 2769], 90.00th=[ 3687], 95.00th=[ 5407], 00:08:57.717 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 7963], 99.95th=[ 8455], 00:08:57.717 | 99.99th=[ 8717] 00:08:57.717 bw ( KiB/s): min=82536, max=98120, per=100.00%, avg=92330.67, stdev=8529.20, samples=3 00:08:57.717 iops : min=20634, max=24530, avg=23082.67, stdev=2132.30, samples=3 00:08:57.717 write: IOPS=22.5k, BW=88.0MiB/s (92.3MB/s)(176MiB/2001msec); 0 zone resets 00:08:57.717 slat (usec): min=3, max=128, avg= 5.84, stdev= 2.65 00:08:57.717 clat (usec): min=531, max=8811, avg=2821.88, stdev=911.54 00:08:57.717 lat (usec): min=545, max=8817, avg=2827.72, stdev=913.21 00:08:57.717 clat percentiles (usec): 00:08:57.717 | 1.00th=[ 2040], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2409], 00:08:57.717 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2540], 60.00th=[ 2573], 00:08:57.717 | 70.00th=[ 2638], 80.00th=[ 2769], 90.00th=[ 3687], 95.00th=[ 5407], 00:08:57.717 | 99.00th=[ 6325], 99.50th=[ 6456], 99.90th=[ 7832], 99.95th=[ 8455], 00:08:57.717 | 99.99th=[ 8586] 00:08:57.717 bw ( KiB/s): min=83616, max=97648, per=100.00%, avg=92424.00, stdev=7671.90, samples=3 00:08:57.717 iops : min=20904, max=24412, avg=23106.67, stdev=1918.44, samples=3 00:08:57.717 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:08:57.717 lat (msec) : 2=0.80%, 4=90.93%, 10=8.26% 00:08:57.717 cpu : usr=99.20%, sys=0.10%, ctx=21, majf=0, minf=607 00:08:57.717 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:57.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:57.717 issued rwts: total=45347,45083,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:57.717 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:57.717 00:08:57.717 Run status group 0 (all jobs): 00:08:57.717 READ: bw=88.5MiB/s (92.8MB/s), 88.5MiB/s-88.5MiB/s (92.8MB/s-92.8MB/s), io=177MiB (186MB), run=2001-2001msec 00:08:57.717 WRITE: bw=88.0MiB/s (92.3MB/s), 88.0MiB/s-88.0MiB/s (92.3MB/s-92.3MB/s), io=176MiB (185MB), run=2001-2001msec 00:08:57.976 ----------------------------------------------------- 00:08:57.976 Suppressions used: 00:08:57.976 count bytes template 00:08:57.976 1 32 /usr/src/fio/parse.c 00:08:57.976 1 8 libtcmalloc_minimal.so 00:08:57.976 ----------------------------------------------------- 00:08:57.976 00:08:57.976 17:25:31 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:57.976 17:25:31 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:57.976 17:25:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:57.976 17:25:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:58.234 17:25:31 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:58.234 17:25:31 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:58.234 17:25:31 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:58.234 17:25:31 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:58.234 17:25:31 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:58.234 17:25:31 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:58.234 17:25:31 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:58.234 17:25:31 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:58.234 17:25:31 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:58.234 17:25:31 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:58.234 17:25:31 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:58.234 17:25:31 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:58.234 17:25:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:58.234 17:25:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:58.234 17:25:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:58.493 17:25:31 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:58.493 17:25:31 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:58.493 17:25:31 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:58.493 17:25:31 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:58.493 17:25:31 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:58.493 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:58.493 fio-3.35 00:08:58.493 Starting 1 thread 00:09:08.491 00:09:08.491 test: (groupid=0, jobs=1): err= 0: pid=64342: Sat Dec 7 17:25:40 2024 00:09:08.491 read: IOPS=21.6k, BW=84.5MiB/s (88.6MB/s)(169MiB/2001msec) 00:09:08.491 slat (nsec): min=3841, max=95263, avg=5780.15, stdev=2478.16 00:09:08.491 clat (usec): min=230, max=7779, avg=2954.77, stdev=840.69 00:09:08.491 lat (usec): min=236, max=7784, avg=2960.55, stdev=842.23 00:09:08.491 clat percentiles (usec): 00:09:08.491 | 1.00th=[ 2180], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2573], 00:09:08.491 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:09:08.491 | 70.00th=[ 2737], 80.00th=[ 2999], 90.00th=[ 3884], 95.00th=[ 5145], 00:09:08.491 | 99.00th=[ 6259], 99.50th=[ 6521], 99.90th=[ 7046], 99.95th=[ 7308], 00:09:08.491 | 99.99th=[ 7635] 00:09:08.491 bw ( KiB/s): min=79384, max=89568, per=96.96%, avg=83872.00, stdev=5198.36, samples=3 00:09:08.491 iops : min=19846, max=22392, avg=20968.00, stdev=1299.59, samples=3 00:09:08.491 write: IOPS=21.5k, BW=83.8MiB/s (87.9MB/s)(168MiB/2001msec); 0 zone resets 00:09:08.491 slat (nsec): min=4168, max=63277, avg=6306.32, stdev=2487.48 00:09:08.491 clat (usec): min=301, max=7882, avg=2959.08, stdev=848.53 00:09:08.491 lat (usec): min=307, max=7887, avg=2965.38, stdev=850.05 00:09:08.491 clat percentiles (usec): 00:09:08.491 | 1.00th=[ 2147], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2573], 00:09:08.491 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:09:08.491 | 70.00th=[ 2737], 80.00th=[ 2999], 90.00th=[ 3884], 95.00th=[ 5211], 00:09:08.491 
| 99.00th=[ 6259], 99.50th=[ 6521], 99.90th=[ 7046], 99.95th=[ 7308], 00:09:08.491 | 99.99th=[ 7635] 00:09:08.491 bw ( KiB/s): min=79256, max=89664, per=97.80%, avg=83968.00, stdev=5273.31, samples=3 00:09:08.491 iops : min=19814, max=22416, avg=20992.00, stdev=1318.33, samples=3 00:09:08.491 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:09:08.491 lat (msec) : 2=0.69%, 4=90.39%, 10=8.88% 00:09:08.491 cpu : usr=99.20%, sys=0.05%, ctx=2, majf=0, minf=606 00:09:08.491 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:08.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:08.491 issued rwts: total=43274,42951,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:08.491 00:09:08.491 Run status group 0 (all jobs): 00:09:08.491 READ: bw=84.5MiB/s (88.6MB/s), 84.5MiB/s-84.5MiB/s (88.6MB/s-88.6MB/s), io=169MiB (177MB), run=2001-2001msec 00:09:08.491 WRITE: bw=83.8MiB/s (87.9MB/s), 83.8MiB/s-83.8MiB/s (87.9MB/s-87.9MB/s), io=168MiB (176MB), run=2001-2001msec 00:09:08.491 ----------------------------------------------------- 00:09:08.491 Suppressions used: 00:09:08.491 count bytes template 00:09:08.491 1 32 /usr/src/fio/parse.c 00:09:08.491 1 8 libtcmalloc_minimal.so 00:09:08.491 ----------------------------------------------------- 00:09:08.491 00:09:08.491 17:25:40 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:08.491 17:25:40 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:08.491 00:09:08.491 real 0m30.321s 00:09:08.491 user 0m16.687s 00:09:08.491 sys 0m25.843s 00:09:08.491 17:25:40 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.491 ************************************ 00:09:08.491 END TEST nvme_fio 00:09:08.491 ************************************ 00:09:08.491 17:25:40 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:08.491 ************************************ 00:09:08.491 END TEST nvme 00:09:08.491 ************************************ 00:09:08.491 00:09:08.491 real 1m39.626s 00:09:08.491 user 3m36.971s 00:09:08.491 sys 0m36.426s 00:09:08.491 17:25:41 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.491 17:25:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:08.491 17:25:41 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:08.491 17:25:41 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:08.491 17:25:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:08.491 17:25:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.491 17:25:41 -- common/autotest_common.sh@10 -- # set +x 00:09:08.491 ************************************ 00:09:08.491 START TEST nvme_scc 00:09:08.491 ************************************ 00:09:08.491 17:25:41 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:08.491 * Looking for test storage... 
00:09:08.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:08.491 17:25:41 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:08.491 17:25:41 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:08.491 17:25:41 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:08.491 17:25:41 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:08.491 17:25:41 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.491 17:25:41 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:08.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.491 --rc genhtml_branch_coverage=1 00:09:08.491 --rc genhtml_function_coverage=1 00:09:08.491 --rc genhtml_legend=1 00:09:08.491 --rc geninfo_all_blocks=1 00:09:08.491 --rc geninfo_unexecuted_blocks=1 00:09:08.491 00:09:08.491 ' 00:09:08.491 17:25:41 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:08.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.491 --rc genhtml_branch_coverage=1 00:09:08.491 --rc genhtml_function_coverage=1 00:09:08.491 --rc genhtml_legend=1 00:09:08.491 --rc geninfo_all_blocks=1 00:09:08.491 --rc geninfo_unexecuted_blocks=1 00:09:08.491 00:09:08.491 ' 00:09:08.491 17:25:41 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:08.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.491 --rc genhtml_branch_coverage=1 00:09:08.491 --rc genhtml_function_coverage=1 00:09:08.491 --rc genhtml_legend=1 00:09:08.491 --rc geninfo_all_blocks=1 00:09:08.491 --rc geninfo_unexecuted_blocks=1 00:09:08.491 00:09:08.491 ' 00:09:08.491 17:25:41 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:08.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.491 --rc genhtml_branch_coverage=1 00:09:08.491 --rc genhtml_function_coverage=1 00:09:08.491 --rc genhtml_legend=1 00:09:08.491 --rc geninfo_all_blocks=1 00:09:08.491 --rc geninfo_unexecuted_blocks=1 00:09:08.491 00:09:08.491 ' 00:09:08.491 17:25:41 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:08.491 17:25:41 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:08.491 17:25:41 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:08.491 17:25:41 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:08.491 17:25:41 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.491 17:25:41 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.492 17:25:41 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.492 17:25:41 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.492 17:25:41 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.492 17:25:41 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:08.492 17:25:41 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
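[editor's note] The lcov probe traced above boils down to a per-component version compare: 'lt 1.15 2' calls 'cmp_versions 1.15 "<" 2', which splits both strings on '.', '-' and ':' and walks the fields numerically; because lcov 1.15 sorts below 2, the run exports the legacy '--rc lcov_branch_coverage=1' option spelling. A minimal bash sketch of that logic, assuming the same splitting rules (a re-creation for illustration, not the SPDK scripts/common.sh source):

    cmp_lt() {                          # usage: cmp_lt 1.15 2 -> true if $1 < $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"  # split on '.', '-', ':' as in the trace
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # numeric fields assumed; the real script validates via a regex ("decimal")
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}  # absent fields compare as 0
            (( d1 < d2 )) && return 0
            (( d1 > d2 )) && return 1
        done
        return 1                        # equal is not less-than
    }
    cmp_lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* option names"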
00:09:08.492 17:25:41 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:08.492 17:25:41 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:08.492 17:25:41 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:08.492 17:25:41 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:08.492 17:25:41 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:08.492 17:25:41 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:08.492 17:25:41 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:08.492 17:25:41 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:08.492 17:25:41 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:08.492 17:25:41 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:08.492 17:25:41 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:08.492 17:25:41 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:08.492 17:25:41 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:08.492 17:25:41 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:08.492 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:08.492 Waiting for block devices as requested 00:09:08.492 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:08.492 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:08.492 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:08.749 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:14.029 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:14.029 17:25:46 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:14.029 17:25:46 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:14.029 17:25:46 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:14.029 17:25:46 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:14.029 17:25:46 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
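[editor's note] The wall of IFS=:/read/eval lines that scan_nvme_ctrls emits from here on is one simple pattern repeated per register: nvme-cli's id-ctrl output is read line by line as "reg : val" and folded into a bash associative array (nvme0[vid]=0x1b36, and so on). A standalone sketch of that pattern, with the array and device names chosen for illustration:

    declare -A ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}             # nvme-cli pads the key column; trim it
        [[ -n $reg && -n $val ]] || continue # skip banner/blank lines, as [[ -n ... ]] does above
        ctrl[$reg]=${val# }                  # keep the value, drop the space after ':'
    done < <(nvme id-ctrl /dev/nvme0)
    echo "vid=${ctrl[vid]} mdts=${ctrl[mdts]} subnqn=${ctrl[subnqn]}"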
00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:14.029 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.030 17:25:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:14.030 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
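[editor's note] Several of the values captured above are bitmasks rather than plain numbers; for example oacs=0x12a advertises optional admin commands. Per the NVMe base specification, bit 1 of OACS indicates Format NVM support and bit 3 indicates Namespace Management, which bash arithmetic can test directly (a quick illustration using the value from this trace):

    oacs=0x12a                               # value read from nvme0 above
    (( oacs & (1 << 1) )) && echo "Format NVM supported"
    (( oacs & (1 << 3) )) && echo "Namespace Management supported"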
00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:14.031 17:25:46 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.031 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.032 17:25:46 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.032 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:14.033 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:14.033 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.033 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.033 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.033 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:14.033 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:14.033 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.033 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.033 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.033 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:14.033 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:14.033 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.033 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.033 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:14.033 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:14.033 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:14.033 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.033 17:25:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.033 17:25:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.033 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:14.033 17:25:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.033 17:25:47 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:14.033 17:25:47 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:14.033 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:14.034 
17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
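[editor's note] Reading the ng0n1 geometry being captured here: the low nibble of flbas (0x4) selects LBA format index 4 (whose lbaf descriptor falls outside this excerpt), each "lbads:N" entry means a 2^N-byte block, and raw capacity is nsze shifted left by lbads. A worked sketch, with lbads=9 assumed from lbaf0 purely for illustration since the in-use format's descriptor is not visible here:

    nsze=0x140000 flbas=0x4                  # values read from ng0n1 above
    fmt=$(( flbas & 0xf ))                   # in-use LBA format index -> 4
    lbads=9                                  # assumption: taken from lbaf0 for the demo
    printf 'format %d: %d blocks x %d B = %d bytes\n' \
        "$fmt" "$(( nsze ))" "$(( 1 << lbads ))" "$(( nsze << lbads ))"
    # -> format 4: 1310720 blocks x 512 B = 671088640 bytes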
00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.034 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:14.035 17:25:47 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.035 17:25:47 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.035 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:14.036 17:25:47 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.036 17:25:47 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:14.036 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:14.037 17:25:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.037 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:14.038 17:25:47 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:14.038 17:25:47 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:14.038 17:25:47 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:14.038 17:25:47 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:14.038 17:25:47 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.038 
17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:14.038 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:14.039 
17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.039 17:25:47 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:14.039 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.040 17:25:47 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:14.040 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
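Several of the id-ctrl values just captured for nvme1 are encoded fields rather than plain counts. mdts=7 is a power of two in units of the controller's minimum memory page size (CAP.MPSMIN), and sqes=0x66 / cqes=0x44 pack the maximum and required queue entry sizes into the high and low nibbles. A worked decode, assuming CAP.MPSMIN=0 (4 KiB pages), the usual value for QEMU's emulated controller; CAP is a register, not part of the id-ctrl text parsed here:

  mdts=7 mpsmin=0                    # mpsmin=0 is the assumption noted above
  page=$(( 4096 << mpsmin ))         # minimum page size in bytes
  echo $(( page << mdts ))           # 524288: transfers are capped at 512 KiB
  # sqes=0x66 -> 2^6 = 64-byte SQ entries (required == maximum)
  # cqes=0x44 -> 2^4 = 16-byte CQ entries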
00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.041 17:25:47 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:14.041 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:14.042 17:25:47 
00:09:14.042 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng1n1: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:14.043 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng1n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:09:14.043 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng1n1: nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:14.044 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng1n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:09:14.044 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng1n1: lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:14.044 17:25:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
00:09:14.044 17:25:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:14.044 17:25:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:09:14.044 17:25:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:09:14.044 17:25:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:14.044 17:25:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
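
The loop at functions.sh@54 visits both namespace nodes of a controller with a single extglob pattern. For ctrl=/sys/class/nvme/nvme1, "${ctrl##*nvme}" strips the longest prefix ending in "nvme" (leaving "1") and "${ctrl##*/}" strips the directory part (leaving "nvme1"), so the glob catches the generic character node ng1n1 and the block node nvme1n1 alike. A small standalone illustration; the sysfs paths are the ones from this run:

    #!/usr/bin/env bash
    shopt -s extglob nullglob            # extglob is required for the @(...) pattern
    ctrl=/sys/class/nvme/nvme1
    echo "${ctrl##*nvme}"                # -> 1      (controller index)
    echo "${ctrl##*/}"                   # -> nvme1  (device name)
    # Expands to /sys/class/nvme/nvme1/@(ng1|nvme1n)* and therefore matches
    # both ng1n1 (char device) and nvme1n1 (block device):
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "${ns##*/}"                 # ng1n1, nvme1n1
    done

That is why the trace parses the same namespace twice: once as ng1n1 and once as nvme1n1, with identical id-ns values.
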
00:09:14.044 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:14.045 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:09:14.046 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1: nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:14.046 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:09:14.046 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1: lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:14.046 17:25:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
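
flbas=0x7 selects LBA format 7, the one nvme-cli marks "(in use)": lbads:12 means 2^12 = 4096-byte data blocks, each carrying ms:64 bytes of metadata. With nsze=0x17a17a blocks, that works out to roughly 5.9 GiB of data capacity for this namespace. A quick sanity check in the shell, using the values from the id-ns dump above:

    #!/usr/bin/env bash
    nsze=0x17a17a   # namespace size in logical blocks, from id-ns above
    lbads=12        # log2(block size) for the in-use format lbaf7
    echo $((nsze))                       # 1548666 blocks
    echo $((nsze * (1 << lbads)))        # 6343335936 bytes
    echo $((nsze * (1 << lbads) >> 30))  # 5 (GiB, integer-truncated; ~5.9 GiB)
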
00:09:14.046 17:25:47 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:14.046 17:25:47 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:14.046 17:25:47 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:14.046 17:25:47 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:09:14.046 17:25:47 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:14.046 17:25:47 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:09:14.046 17:25:47 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:09:14.046 17:25:47 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:09:14.046 17:25:47 nvme_scc -- scripts/common.sh@18 -- # local i
00:09:14.046 17:25:47 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:09:14.046 17:25:47 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:14.046 17:25:47 nvme_scc -- scripts/common.sh@27 -- # return 0
00:09:14.046 17:25:47 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:09:14.046 17:25:47 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:09:14.046 17:25:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:09:14.046 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl '
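
pci_can_use returns 0 here because both device filters are empty: the regex test at common.sh@21 has nothing on its left-hand side and the [[ -z '' ]] at @25 sees an empty block-list, so every discovered BDF is usable. A stripped-down sketch of that gating, assuming space-separated BDF lists in PCI_ALLOWED and PCI_BLOCKED (the variable names follow SPDK's scripts; the matching below is a simplification, not the verbatim common.sh code):

    #!/usr/bin/env bash
    # Sketch of the pci_can_use() gating traced at scripts/common.sh@18-27.
    pci_can_use() {
        local i
        # If an allow-list is set, the BDF must appear on it.
        for i in $PCI_ALLOWED; do
            [[ $i == "$1" ]] && break
        done
        if [[ -n $PCI_ALLOWED && $i != "$1" ]]; then
            return 1
        fi
        # An empty block-list (the case in this run) rejects nothing.
        [[ -z $PCI_BLOCKED ]] && return 0
        for i in $PCI_BLOCKED; do
            [[ $i == "$1" ]] && return 1
        done
        return 0
    }

    pci_can_use 0000:00:12.0 && echo usable   # both lists empty -> usable
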
00:09:14.047 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2: fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100
00:09:14.047 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2: ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0
00:09:14.048 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2: mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0
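
Once nvme_get has filled the array, test scripts can gate on these bitmasks with plain shell arithmetic. oacs=0x12a, for instance, has bit 1 set, which the NVMe base specification defines as Format NVM command support. A hypothetical check (the array literal stands in for the nvme2 array built above):

    #!/usr/bin/env bash
    declare -A nvme2=([oacs]=0x12a)      # as populated by nvme_get above
    # Bit 1 of OACS = Format NVM command supported (NVMe base spec).
    if (( nvme2[oacs] & 0x2 )); then
        echo "nvme2 supports Format NVM"
    fi
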
00:09:14.048 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2: apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0
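
wctemp and cctemp are reported in kelvins, so this QEMU controller declares a warning threshold of 343 K (70 °C) and a critical threshold of 373 K (100 °C); the conversion is just a 273 offset:

    #!/usr/bin/env bash
    declare -A nvme2=([wctemp]=343 [cctemp]=373)   # kelvins, from id-ctrl above
    echo "warning:  $(( nvme2[wctemp] - 273 )) C"  # 70
    echo "critical: $(( nvme2[cctemp] - 273 )) C"  # 100
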
00:09:14.048 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2: dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0
00:09:14.049 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2: anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0
00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:14.050 
17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:14.050 
17:25:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:14.050 17:25:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.051 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.052 17:25:47 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.052 17:25:47 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.052 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:14.053 17:25:47 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.053 
17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.053 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.054 17:25:47 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:14.054 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:14.055 17:25:47 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.055 17:25:47 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.055 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.056 17:25:47 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:14.056 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.057 17:25:47 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.057 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.058 17:25:47 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:14.058 17:25:47 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:14.058 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:14.059 17:25:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.059 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.060 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.060 17:25:47 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:14.061 
17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:14.061 17:25:47 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:14.061 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:14.062 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:14.062 17:25:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:14.063 17:25:47 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:14.063 17:25:47 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:14.063 17:25:47 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:14.063 17:25:47 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:14.063 17:25:47 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.063 17:25:47 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:14.063 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:14.064 17:25:47 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:14.064 17:25:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 
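# The long id-ctrl/id-ns walks above are a single loop in nvme/functions.sh
# repeated once per register. A condensed sketch of the pattern visible in the
# trace (simplified: the real nvme_get also shifts its arguments and handles
# namespaces; nvme_get_sketch is a hypothetical name):
nvme_get_sketch() {
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"                      # e.g. declare -gA nvme3=()
    while IFS=: read -r reg val; do
        reg=${reg// /}                       # drop padding around the key
        [[ -n $val ]] || continue            # the [[ -n ... ]] guards above
        eval "${ref}[\$reg]=\"\${val# }\""   # nvme3[vid]=0x1b36, nvme3[mdts]=7, ...
    done < <(nvme id-ctrl "$dev")
}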
17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.064 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:14.065 17:25:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.065 
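# One readable detail in the registers above: NVMe reports temperature
# thresholds in kelvin, so the wctemp=343 and cctemp=373 captured for this
# emulated controller are a 70 C warning threshold and a 100 C critical
# threshold. Quick arithmetic check:
echo "warning: $((343 - 273)) C, critical: $((373 - 273)) C"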
17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:14.065 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:14.066 
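# The queue entry sizes just parsed are packed nibbles: bits 3:0 give the
# required entry size and bits 7:4 the maximum, each as a power of two.
# sqes=0x66 and cqes=0x44 therefore decode to the standard 64-byte submission
# and 16-byte completion entries:
sqes=0x66 cqes=0x44
echo "SQE: $((1 << (sqes & 0xf))) B, CQE: $((1 << (cqes & 0xf))) B"   # 64 B, 16 B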
17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.066 17:25:47 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.066 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:14.067 17:25:47 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:14.067 17:25:47 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
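# What ctrl_has_scc is checking in the loop above (nvme1 and nvme0 have
# already passed, nvme3 is mid-test): ONCS from id-ctrl is a bitmask of
# optional NVM commands, and bit 8 is the Copy command that the simple-copy
# test needs. With oncs=0x15d on every controller here, the bit is set:
oncs=0x15d
(( oncs & 1 << 8 )) && echo "Simple Copy supported"   # 0x15d & 0x100 = 0x100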
00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:14.067 17:25:47 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:14.067 17:25:47 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:14.067 17:25:47 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:14.067 17:25:47 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:14.634 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:14.892 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:14.892 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:14.892 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:15.151 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:15.151 17:25:48 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:15.151 17:25:48 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:15.151 17:25:48 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.151 17:25:48 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:15.151 ************************************ 00:09:15.151 START TEST nvme_simple_copy 00:09:15.151 ************************************ 00:09:15.151 17:25:48 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:15.409 Initializing NVMe Controllers 00:09:15.409 Attaching to 0000:00:10.0 00:09:15.409 Controller supports SCC. Attached to 0000:00:10.0 00:09:15.409 Namespace ID: 1 size: 6GB 00:09:15.409 Initialization complete. 
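# The result block below is the whole nvme_simple_copy check: fill LBAs 0-63
# with random data, issue one Copy command moving that range to destination
# LBA 256, read the destination back, and count matching LBAs (64 of 64 is a
# pass). An illustrative nvme-cli equivalent against a hypothetical
# /dev/nvme0n1, assuming a recent nvme-cli; this is not what the test binary
# actually runs (--blocks is 0-based, so 63 copies 64 blocks):
nvme copy /dev/nvme0n1 --sdlba=256 --slbs=0 --blocks=63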
00:09:15.409
00:09:15.409 Controller QEMU NVMe Ctrl (12340 )
00:09:15.409 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:09:15.409 Namespace Block Size:4096
00:09:15.409 Writing LBAs 0 to 63 with Random Data
00:09:15.409 Copied LBAs from 0 - 63 to the Destination LBA 256
00:09:15.409 LBAs matching Written Data: 64
00:09:15.409
00:09:15.409 real 0m0.251s
00:09:15.409 user 0m0.085s
00:09:15.409 sys 0m0.066s
00:09:15.409 17:25:48 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:15.409 17:25:48 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:09:15.409 ************************************
00:09:15.409 END TEST nvme_simple_copy
00:09:15.409 ************************************
00:09:15.409 ************************************
00:09:15.409 END TEST nvme_scc
00:09:15.409 ************************************
00:09:15.409
00:09:15.409 real 0m7.592s
00:09:15.409 user 0m1.106s
00:09:15.409 sys 0m1.377s
00:09:15.409 17:25:48 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:15.409 17:25:48 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:15.409 17:25:48 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:09:15.409 17:25:48 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:09:15.409 17:25:48 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:09:15.409 17:25:48 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:09:15.409 17:25:48 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:09:15.410 17:25:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:15.410 17:25:48 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:15.410 17:25:48 -- common/autotest_common.sh@10 -- # set +x
00:09:15.410 ************************************
00:09:15.410 START TEST nvme_fdp
00:09:15.410 ************************************
00:09:15.410 17:25:48 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:09:15.410 * Looking for test storage...
00:09:15.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:09:15.410 17:25:48 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:15.410 17:25:48 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version
00:09:15.410 17:25:48 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:15.668 17:25:48 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:15.669 17:25:48 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.669 17:25:48 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:15.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.669 --rc genhtml_branch_coverage=1 00:09:15.669 --rc genhtml_function_coverage=1 00:09:15.669 --rc genhtml_legend=1 00:09:15.669 --rc geninfo_all_blocks=1 00:09:15.669 --rc geninfo_unexecuted_blocks=1 00:09:15.669 00:09:15.669 ' 00:09:15.669 17:25:48 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:15.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.669 --rc genhtml_branch_coverage=1 00:09:15.669 --rc genhtml_function_coverage=1 00:09:15.669 --rc genhtml_legend=1 00:09:15.669 --rc geninfo_all_blocks=1 00:09:15.669 --rc geninfo_unexecuted_blocks=1 00:09:15.669 00:09:15.669 ' 00:09:15.669 17:25:48 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:15.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.669 --rc genhtml_branch_coverage=1 00:09:15.669 --rc genhtml_function_coverage=1 00:09:15.669 --rc genhtml_legend=1 00:09:15.669 --rc geninfo_all_blocks=1 00:09:15.669 --rc geninfo_unexecuted_blocks=1 00:09:15.669 00:09:15.669 ' 00:09:15.669 17:25:48 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:15.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.669 --rc genhtml_branch_coverage=1 00:09:15.669 --rc genhtml_function_coverage=1 00:09:15.669 --rc genhtml_legend=1 00:09:15.669 --rc geninfo_all_blocks=1 00:09:15.669 --rc geninfo_unexecuted_blocks=1 00:09:15.669 00:09:15.669 ' 00:09:15.669 17:25:48 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:15.669 17:25:48 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:15.669 17:25:48 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:15.669 17:25:48 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:15.669 17:25:48 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.669 17:25:48 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.669 17:25:48 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.669 17:25:48 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.669 17:25:48 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.669 17:25:48 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:15.669 17:25:48 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.669 17:25:48 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:15.669 17:25:48 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:15.669 17:25:48 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:15.669 17:25:48 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:15.669 17:25:48 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:15.669 17:25:48 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:15.669 17:25:48 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:15.669 17:25:48 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:15.669 17:25:48 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:15.669 17:25:48 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:15.669 17:25:48 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:15.928 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:15.928 Waiting for block devices as requested 00:09:16.187 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:16.187 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:16.187 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:16.187 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.480 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:21.480 17:25:54 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:21.480 17:25:54 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:21.480 17:25:54 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:21.480 17:25:54 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:21.480 17:25:54 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:21.480 17:25:54 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:21.480 17:25:54 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:21.480 17:25:54 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:21.480 17:25:54 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:21.480 17:25:54 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:21.480 17:25:54 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:21.480 17:25:54 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:21.480 17:25:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.481 17:25:54 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:21.481 17:25:54 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:21.481 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:21.482 17:25:54 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:21.482 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.483 17:25:54 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:21.483 17:25:54 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:21.483 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.484 
17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:21.484 17:25:54 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:21.484 17:25:54 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:21.484 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:21.485 17:25:54 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.485 17:25:54 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:21.485 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:21.486 17:25:54 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.486 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.487 17:25:54 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:21.487 17:25:54 nvme_fdp -- 
00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:09:21.487 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:09:21.488 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:09:21.488 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:09:21.488 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:09:21.488 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:09:21.488 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:09:21.488 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:09:21.488 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:09:21.488 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:09:21.488 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:09:21.488 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:09:21.488 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:09:21.488 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:21.488 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:21.488 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:21.488 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:21.488 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:21.488 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:21.488 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:21.488 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:21.488 17:25:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:09:21.488 17:25:54 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:09:21.488 17:25:54 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:09:21.488 17:25:54 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:09:21.488 17:25:54 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
'nvme1[sn]="12340 "' 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:21.489 17:25:54 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.489 17:25:54 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.489 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.490 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.491 17:25:54 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:21.491 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.492 17:25:54 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:21.493 17:25:54 
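The functions.sh@54 loop just above walks both namespace node flavors a controller exposes in sysfs: the generic char device (ng1n1) and the block device (nvme1n1). A standalone sketch of that expansion, assuming extglob is enabled as it evidently is in the harness:

  # The expansions are visible in the trace: ${ctrl##*nvme} -> "1" and
  # ${ctrl##*/} -> "nvme1", so the pattern below becomes
  # /sys/class/nvme/nvme1/@(ng1|nvme1n)* and matches ng1n1 and nvme1n1.
  shopt -s extglob
  ctrl=/sys/class/nvme/nvme1
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      [[ -e $ns ]] || continue
      ns_dev=${ns##*/}          # ng1n1 on the first pass, then nvme1n1
      printf '%s\n' "$ns_dev"
  done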
00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val
00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()'
00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a
00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a
00:09:21.492 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a
00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14
00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7
00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7
00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3
00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f
00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0
00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0
00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0
00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0
00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1
00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0
00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0
00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0
00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0
00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0
00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0
00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0
00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0
00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0
00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0
00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0
00:09:21.493 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0
00:09:21.494 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0
00:09:21.494 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128
00:09:21.494 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128
00:09:21.494 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127
00:09:21.494 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0
00:09:21.494 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0
00:09:21.494 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0
00:09:21.494 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0
00:09:21.494 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0
00:09:21.494 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000
00:09:21.494 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000
00:09:21.494 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:21.494 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:21.494 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:21.494 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:21.494 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:09:21.494 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:21.494 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:21.494 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:09:21.494 17:25:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
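Decoding the geometry just captured for ng1n1: flbas=0x7 selects LBA format 7 through its low nibble (valid since nlbaf=7 here), and lbaf7 reads 'ms:64 lbads:12 rp:0 (in use)', where lbads is the log2 of the data block size. A quick standalone check of that arithmetic (illustrative bash with values taken from the log, not code from functions.sh):

  flbas=0x7
  lbaf7='ms:64 lbads:12 rp:0 (in use)'
  nsze=0x17a17a
  fmt=$(( flbas & 0xf ))                       # FLBAS bits 3:0 = format index -> 7
  lbads=${lbaf7#*lbads:}; lbads=${lbads%% *}   # -> 12
  echo "format $fmt: $(( 1 << lbads ))-byte blocks"   # 4096-byte blocks, 64B metadata
  echo "capacity: $(( nsze * (1 << lbads) )) bytes"   # 6343335936 (~5.9 GiB)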
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:21.495 17:25:54 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.495 17:25:54 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:21.495 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:21.496 17:25:54 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
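The entries above show the nvme_get helper splitting each "reg : val" line of nvme id-ns output on IFS=: and eval'ing the pair into the global associative array nvme1n1. A minimal sketch of that pattern, keeping the IFS=:/read/eval skeleton the trace exercises (the function name and the exact whitespace trimming are illustrative assumptions, not code lifted from nvme/functions.sh):

    # Populate a global associative array from "reg : val" lines.
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                # e.g. declares global array nvme1n1
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}       # "nsze " -> "nsze" (assumed trim)
            [[ -n $reg && -n $val ]] || continue   # skips header lines with no value
            eval "${ref}[\$reg]=\${val# }" # e.g. nvme1n1[nsze]=0x17a17a
        done < <("$@")
    }
    # usage: nvme_get_sketch nvme1n1 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1

With the values captured for this namespace (nsze=0x17a17a and flbas=0x7 selecting the lbads:12 format, i.e. 4096-byte blocks), the formatted capacity works out to 0x17a17a * 4096 = 6343335936 bytes, roughly 6.3 GB.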
00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:21.496 17:25:54 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:21.496 17:25:54 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:21.496 17:25:54 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:21.496 17:25:54 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.496 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.497 17:25:54 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
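The id-ctrl values being collected for nvme2 decode directly: ver packs the NVMe version as major/minor/tertiary bytes, and mdts is a power of two in units of the controller's minimum memory page size. A quick decode of the two values above, assuming the common MPSMIN=0 (4 KiB pages), which this trace does not itself show:

    ver=0x10400 mdts=7
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))
    # -> NVMe 1.4.0
    echo "$(((1 << mdts) * 4)) KiB max data transfer size"
    # -> 512 KiB (under the MPSMIN=0 assumption)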
00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.497 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:21.498 17:25:54 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
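The wctemp and cctemp fields just above are reported in Kelvin per the NVMe spec, so the thresholds QEMU advertises for this controller convert to:

    wctemp=343 cctemp=373
    echo "warning threshold:  $((wctemp - 273)) C"    # -> 70 C
    echo "critical threshold: $((cctemp - 273)) C"    # -> 100 C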
00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:21.498 17:25:54 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:21.498 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:21.499 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.500 17:25:54 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.500 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.501 
17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.501 17:25:54 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:21.501 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.502 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:21.503 17:25:54 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 
17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:21.503 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:21.504 
17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.504 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:21.505 17:25:54 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:21.505 17:25:54 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.505 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:21.506 17:25:54 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:21.506 
17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:21.506 17:25:54 
00:09:21.506 17:25:54 nvme_fdp -- nvme/functions.sh -- # nvme_get nvme2n1 (id-ns /dev/nvme2n1), remaining parsed fields:
00:09:21.506 17:25:54 nvme_fdp --   nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:09:21.507 17:25:54 nvme_fdp --   mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:21.507 17:25:54 nvme_fdp --   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:21.507 17:25:54 nvme_fdp --   lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
00:09:21.507 17:25:54 nvme_fdp --   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:09:21.507 17:25:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme2n1
00:09:21.507 17:25:54 nvme_fdp -- nvme/functions.sh@54-57 -- # next namespace: /sys/class/nvme/nvme2/nvme2n2 exists; ns_dev=nvme2n2; nvme_get nvme2n2 id-ns /dev/nvme2n2
00:09:21.507 17:25:54 nvme_fdp -- nvme/functions.sh@16-21 -- # local -gA 'nvme2n2=()'; parsing /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 via IFS=: read -r reg val
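The nvme_get call just issued is the engine behind every eval line in this trace: it runs the nvme-cli query, splits each "reg : val" line of the output on the first colon, and evals the pair into a global associative array named after the device. Below is a minimal sketch of that pattern, reconstructed from the trace; the real helper in nvme/functions.sh also handles binary descriptors and the ng* generic devices, and its argument layout differs slightly.

    #!/usr/bin/env bash
    # Condensed sketch of the nvme_get pattern driving the trace above.
    # $1 names the global associative array to fill (e.g. nvme2n2); the
    # remaining arguments are the command whose output gets parsed.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                # same trick as functions.sh@20
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue      # skip banner/blank lines (functions.sh@22)
            reg=${reg//[[:space:]]/}       # nvme-cli pads register names
            val=${val# }                   # drop the space after the colon
            eval "${ref}[${reg}]=\$val"    # e.g. nvme2n2[nsze]=0x100000
        done < <("$@")
    }

    # Usage (argument layout simplified versus the real helper):
    #   nvme_get nvme2n2 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
    #   echo "${nvme2n2[nsze]}"   # -> 0x100000

Note that with two variables, read puts everything after the first colon into val, which is why multi-colon values such as 'ms:0 lbads:12 rp:0 (in use)' survive intact in the arrays.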
00:09:21.507 17:25:54 nvme_fdp -- nvme/functions.sh -- # nvme_get nvme2n2 (id-ns /dev/nvme2n2), parsed fields:
00:09:21.507 17:25:54 nvme_fdp --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:09:21.508 17:25:54 nvme_fdp --   nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:09:21.508 17:25:54 nvme_fdp --   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:21.509 17:25:54 nvme_fdp --   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:21.509 17:25:54 nvme_fdp --   lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
00:09:21.509 17:25:54 nvme_fdp --   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:09:21.509 17:25:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[2]=nvme2n2
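The namespace loop that now advances to nvme2n3 is the extglob pattern seen at functions.sh@54: for nvme2 it expands to both ng2* (generic character devices) and nvme2n* (block devices) under the controller's sysfs directory. A standalone sketch of the same expansion follows; the controller path comes from the trace, the echo is illustrative only.

    #!/usr/bin/env bash
    # Sketch of the namespace glob at nvme/functions.sh@54.
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2          # controller being scanned above
    # "ng${ctrl##*nvme}" -> ng2, "${ctrl##*/}n" -> nvme2n, so @(a|b)*
    # matches both nvme2/ng2* and nvme2/nvme2n* entries.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue        # functions.sh@55 re-checks existence
        ns_dev=${ns##*/}                # nvme2n1, nvme2n2, nvme2n3, ...
        echo "namespace candidate: $ns_dev"
    done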
00:09:21.509 17:25:54 nvme_fdp -- nvme/functions.sh@54-57 -- # next namespace: /sys/class/nvme/nvme2/nvme2n3 exists; ns_dev=nvme2n3; nvme_get nvme2n3 id-ns /dev/nvme2n3
00:09:21.773 17:25:54 nvme_fdp -- nvme/functions.sh -- # nvme_get nvme2n3 (id-ns /dev/nvme2n3), parsed fields:
00:09:21.773 17:25:54 nvme_fdp --   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:09:21.774 17:25:54 nvme_fdp --   nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:09:21.774 17:25:54 nvme_fdp --   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:21.774 17:25:54 nvme_fdp --   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:21.774 17:25:54 nvme_fdp --   lbaf0-lbaf7 identical to nvme2n1/nvme2n2 above, lbaf4='ms:0 lbads:12 rp:0 (in use)'
00:09:21.775 17:25:54 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[3]=nvme2n3
00:09:21.775 17:25:54 nvme_fdp -- nvme/functions.sh@60-63 -- # ctrls[nvme2]=nvme2; nvmes[nvme2]=nvme2_ns; bdfs[nvme2]=0000:00:12.0; ordered_ctrls[2]=nvme2
00:09:21.775 17:25:54 nvme_fdp -- nvme/functions.sh@47-50 -- # next controller: /sys/class/nvme/nvme3 exists; pci=0000:00:13.0; pci_can_use 0000:00:13.0 (scripts/common.sh@18-27) returns 0
00:09:21.775 17:25:54 nvme_fdp -- nvme/functions.sh@51-52 -- # ctrl_dev=nvme3; nvme_get nvme3 id-ctrl /dev/nvme3; local -gA 'nvme3=()'; parsing /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
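Before claiming nvme3, the scan consults pci_can_use (scripts/common.sh@18-27) with the controller's BDF. In this run both filter lists are empty: the @21 regex test fails against an empty left-hand side, the @25 emptiness test passes, and the function returns 0. A hedged approximation of that gate is sketched below; the PCI_ALLOWED/PCI_BLOCKED names follow SPDK convention, and the exact precedence in scripts/common.sh may differ.

    #!/usr/bin/env bash
    # Approximate sketch of the pci_can_use gate traced above.
    pci_can_use() {
        local i
        # Allow-list check, as at scripts/common.sh@21; with PCI_ALLOWED
        # empty this match fails and we fall through.
        if [[ $PCI_ALLOWED =~ $1 ]]; then
            return 0
        elif [[ -n $PCI_ALLOWED ]]; then
            return 1                        # allow-list set, device not on it
        fi
        [[ -z $PCI_BLOCKED ]] && return 0   # no block-list -> usable (@25, @27)
        for i in $PCI_BLOCKED; do
            [[ $i == "$1" ]] && return 1    # explicitly blocked
        done
        return 0
    }

    pci_can_use 0000:00:13.0 && echo "claiming controller at 0000:00:13.0"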
00:09:21.775 17:25:54 nvme_fdp -- nvme/functions.sh -- # nvme_get nvme3 (id-ctrl /dev/nvme3), parsed fields:
00:09:21.775 17:25:54 nvme_fdp --   vid=0x1b36 ssvid=0x1af4 sn='12343 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0x2 mdts=7
00:09:21.775 17:25:54 nvme_fdp --   cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x88010 rrls=0 cntrltype=1
00:09:21.775 17:25:54 nvme_fdp --   fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:09:21.776 17:25:54 nvme_fdp --   oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
00:09:21.776 17:25:54 nvme_fdp --   mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0
00:09:21.777 17:25:54 nvme_fdp --   mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=1 anatt=0 anacap=0 anagrpmax=0
00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh -- # id-ctrl dump continues; this capture breaks off mid-entry here
# eval 'nvme3[nanagrpid]="0"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.777 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
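The long run of IFS=: / read -r reg val / eval triples above is nvme/functions.sh folding "name : value" lines from the controller's Identify data into a per-controller bash associative array (here nvme3), one register per key. A condensed sketch of that parsing pattern, assuming nvme-cli's id-ctrl output format and a /dev/nvme3 character device (this is an illustration, not the functions.sh code itself):

# Parse "reg : val" lines into an associative array, mirroring the
# read/eval loop traced above.
declare -A nvme3
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}   # strip padding around the register name
    val=${val# }               # drop the leading space before the value
    [[ -n $reg && -n $val ]] && nvme3[$reg]=$val
done < <(nvme id-ctrl /dev/nvme3)
echo "${nvme3[sqes]}"          # e.g. 0x66, as captured in the trace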
00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:21.778 17:25:54 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:21.778 17:25:54 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:21.778 17:25:54 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:21.778 17:25:54 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:21.778 17:25:54 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:22.039 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:22.608 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:22.608 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:22.608 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:22.608 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:22.869 17:25:56 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:22.869 17:25:56 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:22.869 17:25:56 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.869 17:25:56 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:22.869 ************************************ 00:09:22.869 START TEST nvme_flexible_data_placement 00:09:22.869 ************************************ 00:09:22.869 17:25:56 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:23.129 Initializing NVMe Controllers 00:09:23.129 Attaching to 0000:00:13.0 00:09:23.129 Controller supports FDP Attached to 0000:00:13.0 00:09:23.129 Namespace ID: 1 Endurance Group ID: 1 00:09:23.129 Initialization complete. 
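Controller selection above comes down to one bit: get_ctrls_with_feature reads each controller's CTRATT (Controller Attributes) value and ctrl_has_fdp tests bit 19, the Flexible Data Placement capability. nvme1, nvme0 and nvme2 report ctratt=0x8000 (bit 19 clear) and are skipped; nvme3 reports 0x88010 (bit 19 set) and is echoed as the FDP test target. A minimal standalone version of the same check, assuming nvme-cli and a /dev/nvme0 device:

# Read CTRATT from Identify Controller and test bit 19 (FDP), the
# same (( ctratt & 1 << 19 )) check done in nvme/functions.sh.
ctratt=$(nvme id-ctrl /dev/nvme0 | awk -F': *' '/^ctratt/ {print $2}')
if (( ctratt & 1 << 19 )); then
    echo "controller supports FDP"
fi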
00:09:23.129 00:09:23.129 ================================== 00:09:23.129 == FDP tests for Namespace: #01 == 00:09:23.129 ================================== 00:09:23.129 00:09:23.129 Get Feature: FDP: 00:09:23.129 ================= 00:09:23.129 Enabled: Yes 00:09:23.129 FDP configuration Index: 0 00:09:23.129 00:09:23.129 FDP configurations log page 00:09:23.129 =========================== 00:09:23.129 Number of FDP configurations: 1 00:09:23.129 Version: 0 00:09:23.129 Size: 112 00:09:23.129 FDP Configuration Descriptor: 0 00:09:23.129 Descriptor Size: 96 00:09:23.129 Reclaim Group Identifier format: 2 00:09:23.129 FDP Volatile Write Cache: Not Present 00:09:23.129 FDP Configuration: Valid 00:09:23.129 Vendor Specific Size: 0 00:09:23.129 Number of Reclaim Groups: 2 00:09:23.129 Number of Reclaim Unit Handles: 8 00:09:23.129 Max Placement Identifiers: 128 00:09:23.129 Number of Namespaces Supported: 256 00:09:23.129 Reclaim Unit Nominal Size: 6000000 bytes 00:09:23.129 Estimated Reclaim Unit Time Limit: Not Reported 00:09:23.129 RUH Desc #000: RUH Type: Initially Isolated 00:09:23.129 RUH Desc #001: RUH Type: Initially Isolated 00:09:23.129 RUH Desc #002: RUH Type: Initially Isolated 00:09:23.129 RUH Desc #003: RUH Type: Initially Isolated 00:09:23.129 RUH Desc #004: RUH Type: Initially Isolated 00:09:23.129 RUH Desc #005: RUH Type: Initially Isolated 00:09:23.129 RUH Desc #006: RUH Type: Initially Isolated 00:09:23.129 RUH Desc #007: RUH Type: Initially Isolated 00:09:23.129 00:09:23.129 FDP reclaim unit handle usage log page 00:09:23.129 ====================================== 00:09:23.129 Number of Reclaim Unit Handles: 8 00:09:23.129 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:23.129 RUH Usage Desc #001: RUH Attributes: Unused 00:09:23.129 RUH Usage Desc #002: RUH Attributes: Unused 00:09:23.129 RUH Usage Desc #003: RUH Attributes: Unused 00:09:23.129 RUH Usage Desc #004: RUH Attributes: Unused 00:09:23.129 RUH Usage Desc #005: RUH Attributes: Unused 00:09:23.129 RUH Usage Desc #006: RUH Attributes: Unused 00:09:23.129 RUH Usage Desc #007: RUH Attributes: Unused 00:09:23.129 00:09:23.129 FDP statistics log page 00:09:23.129 ======================= 00:09:23.129 Host bytes with metadata written: 1145249792 00:09:23.129 Media bytes with metadata written: 1145458688 00:09:23.129 Media bytes erased: 0 00:09:23.129 00:09:23.129 FDP Reclaim unit handle status 00:09:23.129 ============================== 00:09:23.129 Number of RUHS descriptors: 2 00:09:23.129 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003bce 00:09:23.129 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:23.129 00:09:23.129 FDP write on placement id: 0 success 00:09:23.129 00:09:23.129 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:23.129 00:09:23.129 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:23.129 00:09:23.129 Get Feature: FDP Events for Placement handle: #0 00:09:23.129 ======================== 00:09:23.129 Number of FDP Events: 6 00:09:23.129 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:23.129 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:23.129 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:23.129 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:23.129 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:23.129 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:23.129 00:09:23.129 FDP events log
page 00:09:23.129 =================== 00:09:23.129 Number of FDP events: 1 00:09:23.129 FDP Event #0: 00:09:23.129 Event Type: RU Not Written to Capacity 00:09:23.129 Placement Identifier: Valid 00:09:23.129 NSID: Valid 00:09:23.129 Location: Valid 00:09:23.129 Placement Identifier: 0 00:09:23.129 Event Timestamp: 7 00:09:23.129 Namespace Identifier: 1 00:09:23.129 Reclaim Group Identifier: 0 00:09:23.129 Reclaim Unit Handle Identifier: 0 00:09:23.129 00:09:23.129 FDP test passed 00:09:23.129 00:09:23.129 real 0m0.253s 00:09:23.129 user 0m0.072s 00:09:23.129 sys 0m0.079s 00:09:23.129 17:25:56 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.129 17:25:56 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:23.129 ************************************ 00:09:23.129 END TEST nvme_flexible_data_placement 00:09:23.129 ************************************ 00:09:23.129 00:09:23.129 real 0m7.676s 00:09:23.129 user 0m1.089s 00:09:23.129 sys 0m1.424s 00:09:23.129 ************************************ 00:09:23.129 END TEST nvme_fdp 00:09:23.129 ************************************ 00:09:23.129 17:25:56 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.130 17:25:56 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:23.130 17:25:56 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:23.130 17:25:56 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:23.130 17:25:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:23.130 17:25:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.130 17:25:56 -- common/autotest_common.sh@10 -- # set +x 00:09:23.130 ************************************ 00:09:23.130 START TEST nvme_rpc 00:09:23.130 ************************************ 00:09:23.130 17:25:56 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:23.130 * Looking for test storage... 
00:09:23.391 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:23.391 17:25:56 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:23.391 17:25:56 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:23.391 17:25:56 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:23.391 17:25:56 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:23.391 17:25:56 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:23.391 17:25:56 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.391 17:25:56 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:23.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.391 --rc genhtml_branch_coverage=1 00:09:23.391 --rc genhtml_function_coverage=1 00:09:23.391 --rc genhtml_legend=1 00:09:23.391 --rc geninfo_all_blocks=1 00:09:23.391 --rc geninfo_unexecuted_blocks=1 00:09:23.391 00:09:23.391 ' 00:09:23.391 17:25:56 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:23.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.391 --rc genhtml_branch_coverage=1 00:09:23.391 --rc genhtml_function_coverage=1 00:09:23.391 --rc genhtml_legend=1 00:09:23.391 --rc geninfo_all_blocks=1 00:09:23.391 --rc geninfo_unexecuted_blocks=1 00:09:23.391 00:09:23.391 ' 00:09:23.391 17:25:56 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:23.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.391 --rc genhtml_branch_coverage=1 00:09:23.391 --rc genhtml_function_coverage=1 00:09:23.391 --rc genhtml_legend=1 00:09:23.391 --rc geninfo_all_blocks=1 00:09:23.391 --rc geninfo_unexecuted_blocks=1 00:09:23.391 00:09:23.391 ' 00:09:23.391 17:25:56 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:23.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.391 --rc genhtml_branch_coverage=1 00:09:23.391 --rc genhtml_function_coverage=1 00:09:23.391 --rc genhtml_legend=1 00:09:23.391 --rc geninfo_all_blocks=1 00:09:23.391 --rc geninfo_unexecuted_blocks=1 00:09:23.391 00:09:23.391 ' 00:09:23.391 17:25:56 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:23.391 17:25:56 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:23.391 17:25:56 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:23.391 17:25:56 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:23.391 17:25:56 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:23.391 17:25:56 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:23.391 17:25:56 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:23.391 17:25:56 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:23.391 17:25:56 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:23.391 17:25:56 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:23.391 17:25:56 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:23.391 17:25:56 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:23.391 17:25:56 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:23.391 17:25:56 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:23.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.391 17:25:56 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:23.392 17:25:56 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65726 00:09:23.392 17:25:56 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:23.392 17:25:56 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:23.392 17:25:56 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65726 00:09:23.392 17:25:56 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65726 ']' 00:09:23.392 17:25:56 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.392 17:25:56 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.392 17:25:56 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.392 17:25:56 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.392 17:25:56 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.392 [2024-12-07 17:25:56.753208] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:09:23.392 [2024-12-07 17:25:56.753352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65726 ] 00:09:23.654 [2024-12-07 17:25:56.914647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:23.915 [2024-12-07 17:25:57.037973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.915 [2024-12-07 17:25:57.038087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.487 17:25:57 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.487 17:25:57 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:24.487 17:25:57 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:24.748 Nvme0n1 00:09:24.748 17:25:58 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:24.748 17:25:58 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:25.009 request: 00:09:25.009 { 00:09:25.009 "bdev_name": "Nvme0n1", 00:09:25.009 "filename": "non_existing_file", 00:09:25.009 "method": "bdev_nvme_apply_firmware", 00:09:25.009 "req_id": 1 00:09:25.009 } 00:09:25.009 Got JSON-RPC error response 00:09:25.009 response: 00:09:25.009 { 00:09:25.009 "code": -32603, 00:09:25.009 "message": "open file failed." 00:09:25.009 } 00:09:25.009 17:25:58 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:25.009 17:25:58 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:25.009 17:25:58 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:25.270 17:25:58 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:25.270 17:25:58 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65726 00:09:25.270 17:25:58 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65726 ']' 00:09:25.270 17:25:58 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65726 00:09:25.270 17:25:58 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:25.270 17:25:58 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.270 17:25:58 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65726 00:09:25.270 killing process with pid 65726 00:09:25.270 17:25:58 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.270 17:25:58 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.270 17:25:58 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65726' 00:09:25.270 17:25:58 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65726 00:09:25.270 17:25:58 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65726 00:09:27.181 ************************************ 00:09:27.181 END TEST nvme_rpc 00:09:27.181 ************************************ 00:09:27.181 00:09:27.181 real 0m3.669s 00:09:27.181 user 0m6.845s 00:09:27.181 sys 0m0.664s 00:09:27.181 17:26:00 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.181 17:26:00 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.181 17:26:00 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:27.181 17:26:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:27.181 17:26:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.181 17:26:00 -- common/autotest_common.sh@10 -- # set +x 00:09:27.181 ************************************ 00:09:27.181 START TEST nvme_rpc_timeouts 00:09:27.181 ************************************ 00:09:27.181 17:26:00 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:27.181 * Looking for test storage... 00:09:27.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:27.181 17:26:00 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:27.181 17:26:00 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:09:27.181 17:26:00 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:27.181 17:26:00 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.181 17:26:00 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:27.181 17:26:00 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.181 17:26:00 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:27.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.181 --rc genhtml_branch_coverage=1 00:09:27.181 --rc genhtml_function_coverage=1 00:09:27.181 --rc genhtml_legend=1 00:09:27.181 --rc geninfo_all_blocks=1 00:09:27.181 --rc geninfo_unexecuted_blocks=1 00:09:27.181 00:09:27.181 ' 00:09:27.181 17:26:00 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:27.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.181 --rc genhtml_branch_coverage=1 00:09:27.181 --rc genhtml_function_coverage=1 00:09:27.181 --rc genhtml_legend=1 00:09:27.181 --rc geninfo_all_blocks=1 00:09:27.181 --rc geninfo_unexecuted_blocks=1 00:09:27.181 00:09:27.181 ' 00:09:27.181 17:26:00 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:27.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.181 --rc genhtml_branch_coverage=1 00:09:27.181 --rc genhtml_function_coverage=1 00:09:27.181 --rc genhtml_legend=1 00:09:27.181 --rc geninfo_all_blocks=1 00:09:27.181 --rc geninfo_unexecuted_blocks=1 00:09:27.181 00:09:27.181 ' 00:09:27.181 17:26:00 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:27.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.181 --rc genhtml_branch_coverage=1 00:09:27.181 --rc genhtml_function_coverage=1 00:09:27.181 --rc genhtml_legend=1 00:09:27.181 --rc geninfo_all_blocks=1 00:09:27.181 --rc geninfo_unexecuted_blocks=1 00:09:27.181 00:09:27.181 ' 00:09:27.181 17:26:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:27.181 17:26:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65791 00:09:27.181 17:26:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65791 00:09:27.181 17:26:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65829 00:09:27.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
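Both RPC test suites follow the same prologue seen here: start a dedicated spdk_tgt on a two-core mask (-m 0x3), then block in waitforlisten until the target's JSON-RPC socket at /var/tmp/spdk.sock answers before issuing any rpc.py calls. A rough standalone equivalent, assuming an SPDK build tree; the polling loop is a sketch, not the actual waitforlisten implementation:

# Launch the target, then poll the RPC socket until it responds.
build/bin/spdk_tgt -m 0x3 &
spdk_tgt_pid=$!
until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done
echo "spdk_tgt $spdk_tgt_pid is listening on /var/tmp/spdk.sock"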
00:09:27.181 17:26:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:27.181 17:26:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65829 00:09:27.181 17:26:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:27.181 17:26:00 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65829 ']' 00:09:27.181 17:26:00 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.181 17:26:00 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.181 17:26:00 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.181 17:26:00 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.181 17:26:00 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:27.181 [2024-12-07 17:26:00.378541] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:09:27.181 [2024-12-07 17:26:00.378797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65829 ] 00:09:27.181 [2024-12-07 17:26:00.535307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:27.439 [2024-12-07 17:26:00.613575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.439 [2024-12-07 17:26:00.613654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.005 17:26:01 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.005 Checking default timeout settings: 00:09:28.005 17:26:01 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:28.005 17:26:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:28.005 17:26:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:28.263 Making settings changes with rpc: 00:09:28.263 17:26:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:28.263 17:26:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:28.521 Check default vs. modified settings: 00:09:28.521 17:26:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:28.521 17:26:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65791 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65791 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:28.779 Setting action_on_timeout is changed as expected. 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65791 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65791 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:28.779 Setting timeout_us is changed as expected. 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65791 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:28.779 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65791 00:09:28.780 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:28.780 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:28.780 Setting timeout_admin_us is changed as expected. 00:09:28.780 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:28.780 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:28.780 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:28.780 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:28.780 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65791 /tmp/settings_modified_65791 00:09:28.780 17:26:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65829 00:09:28.780 17:26:02 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65829 ']' 00:09:28.780 17:26:02 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65829 00:09:28.780 17:26:02 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:28.780 17:26:02 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.780 17:26:02 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65829 00:09:28.780 killing process with pid 65829 00:09:28.780 17:26:02 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:28.780 17:26:02 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:28.780 17:26:02 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65829' 00:09:28.780 17:26:02 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65829 00:09:28.780 17:26:02 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65829 00:09:30.156 RPC TIMEOUT SETTING TEST PASSED. 00:09:30.156 17:26:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
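The pass/fail lines above come from a before/after diff: the test snapshots save_config to /tmp/settings_default_65791, applies new values with bdev_nvme_set_options, snapshots again to /tmp/settings_modified_65791, then greps each setting out of both files (with sed stripping non-alphanumerics) to confirm action_on_timeout changed none -> abort, timeout_us 0 -> 12000000, and timeout_admin_us 0 -> 24000000. A compact way to read the same fields back, assuming a running target and jq:

# Apply the timeouts, then extract them from the saved config
# (a sketch of the test's grep/awk/sed comparison, not its code).
scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 \
    --timeout-admin-us=24000000 --action-on-timeout=abort
scripts/rpc.py save_config | jq '
    .subsystems[] | select(.subsystem == "bdev")
    | .config[]   | select(.method == "bdev_nvme_set_options")
    | .params     | {action_on_timeout, timeout_us, timeout_admin_us}'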
00:09:30.156 00:09:30.156 real 0m3.161s 00:09:30.156 user 0m6.188s 00:09:30.156 sys 0m0.482s 00:09:30.156 ************************************ 00:09:30.156 END TEST nvme_rpc_timeouts 00:09:30.156 ************************************ 00:09:30.156 17:26:03 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.156 17:26:03 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:30.156 17:26:03 -- spdk/autotest.sh@239 -- # uname -s 00:09:30.156 17:26:03 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:30.156 17:26:03 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:30.156 17:26:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.156 17:26:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.156 17:26:03 -- common/autotest_common.sh@10 -- # set +x 00:09:30.156 ************************************ 00:09:30.156 START TEST sw_hotplug 00:09:30.156 ************************************ 00:09:30.156 17:26:03 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:30.156 * Looking for test storage... 00:09:30.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:30.156 17:26:03 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:30.156 17:26:03 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:09:30.157 17:26:03 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:30.157 17:26:03 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.157 17:26:03 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:30.157 17:26:03 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.157 17:26:03 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:30.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.157 --rc genhtml_branch_coverage=1 00:09:30.157 --rc genhtml_function_coverage=1 00:09:30.157 --rc genhtml_legend=1 00:09:30.157 --rc geninfo_all_blocks=1 00:09:30.157 --rc geninfo_unexecuted_blocks=1 00:09:30.157 00:09:30.157 ' 00:09:30.157 17:26:03 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:30.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.157 --rc genhtml_branch_coverage=1 00:09:30.157 --rc genhtml_function_coverage=1 00:09:30.157 --rc genhtml_legend=1 00:09:30.157 --rc geninfo_all_blocks=1 00:09:30.157 --rc geninfo_unexecuted_blocks=1 00:09:30.157 00:09:30.157 ' 00:09:30.157 17:26:03 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:30.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.157 --rc genhtml_branch_coverage=1 00:09:30.157 --rc genhtml_function_coverage=1 00:09:30.157 --rc genhtml_legend=1 00:09:30.157 --rc geninfo_all_blocks=1 00:09:30.157 --rc geninfo_unexecuted_blocks=1 00:09:30.157 00:09:30.157 ' 00:09:30.157 17:26:03 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:30.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.157 --rc genhtml_branch_coverage=1 00:09:30.157 --rc genhtml_function_coverage=1 00:09:30.157 --rc genhtml_legend=1 00:09:30.157 --rc geninfo_all_blocks=1 00:09:30.157 --rc geninfo_unexecuted_blocks=1 00:09:30.157 00:09:30.157 ' 00:09:30.157 17:26:03 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:30.728 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:30.728 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:30.728 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:30.728 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:30.728 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:30.728 17:26:04 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:30.728 17:26:04 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:30.728 17:26:04 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
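The hotplug test builds its device list with nvme_in_userspace, traced below: it walks lspci -mm -n -D output for PCI class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVMe), filters out addresses excluded via PCI_ALLOWED/PCI_BLOCKED, and keeps only functions with an entry under /sys/bus/pci/drivers/nvme. A one-line sketch of the class-code match alone, assuming pciutils:

# List NVMe PCI functions: class/subclass 0108, progif 02
# (condensed form of the lspci pipeline traced below).
lspci -mm -n -D | grep -i -- -p02 | awk '$2 ~ /"0108"/ {print $1}'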
00:09:30.728 17:26:04 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:30.728 17:26:04 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:30.728 17:26:04 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:30.728 17:26:04 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:30.728 17:26:04 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:30.728 17:26:04 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:31.297 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:31.297 Waiting for block devices as requested 00:09:31.297 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:31.297 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:31.558 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:31.558 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:36.850 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:36.850 17:26:09 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:36.850 17:26:09 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:37.110 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:37.111 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:37.111 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:37.377 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:37.638 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.638 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:37.638 17:26:10 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:37.638 17:26:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:37.899 17:26:11 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:37.899 17:26:11 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:37.899 17:26:11 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66682 00:09:37.899 17:26:11 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:37.900 17:26:11 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:37.900 17:26:11 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:37.900 17:26:11 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:37.900 17:26:11 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:37.900 17:26:11 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:37.900 17:26:11 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:37.900 17:26:11 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:37.900 17:26:11 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:37.900 17:26:11 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:37.900 17:26:11 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:37.900 17:26:11 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:37.900 17:26:11 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:37.900 17:26:11 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:37.900 Initializing NVMe Controllers 00:09:37.900 Attaching to 0000:00:10.0 00:09:37.900 Attaching to 0000:00:11.0 00:09:37.900 Attached to 0000:00:10.0 00:09:37.900 Attached to 0000:00:11.0 00:09:37.900 Initialization complete. Starting I/O... 
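
timing_cmd above wraps the helper in bash's time keyword with TIMEFORMAT=%2R, which trims the timing report to a single wall-clock figure on stderr; that is where the "remove_attach_helper took NNs" numbers later in this log come from. The pattern in isolation:

    #!/usr/bin/env bash
    # TIMEFORMAT=%2R makes the time keyword print only elapsed real
    # time with two decimals; redirecting the group's stderr into the
    # command substitution captures it, as timing_cmd does.
    TIMEFORMAT=%2R
    helper_time=$( { time sleep 0.3 >/dev/null; } 2>&1 )
    printf 'remove_attach_helper took %ss to complete\n' "$helper_time"
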
00:09:37.900 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:37.900 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:37.900 00:09:39.289 QEMU NVMe Ctrl (12340 ): 2264 I/Os completed (+2264) 00:09:39.289 QEMU NVMe Ctrl (12341 ): 2264 I/Os completed (+2264) 00:09:39.289 00:09:40.236 QEMU NVMe Ctrl (12340 ): 4980 I/Os completed (+2716) 00:09:40.236 QEMU NVMe Ctrl (12341 ): 4987 I/Os completed (+2723) 00:09:40.236 00:09:41.181 QEMU NVMe Ctrl (12340 ): 7784 I/Os completed (+2804) 00:09:41.181 QEMU NVMe Ctrl (12341 ): 7791 I/Os completed (+2804) 00:09:41.181 00:09:42.124 QEMU NVMe Ctrl (12340 ): 10580 I/Os completed (+2796) 00:09:42.124 QEMU NVMe Ctrl (12341 ): 10587 I/Os completed (+2796) 00:09:42.124 00:09:43.062 QEMU NVMe Ctrl (12340 ): 13560 I/Os completed (+2980) 00:09:43.062 QEMU NVMe Ctrl (12341 ): 13572 I/Os completed (+2985) 00:09:43.062 00:09:44.005 17:26:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:44.005 17:26:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:44.005 17:26:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:44.005 [2024-12-07 17:26:17.034265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:44.005 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:44.005 [2024-12-07 17:26:17.035713] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.005 [2024-12-07 17:26:17.035796] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.005 [2024-12-07 17:26:17.035815] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.005 [2024-12-07 17:26:17.035834] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.005 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:44.005 [2024-12-07 17:26:17.038124] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.005 [2024-12-07 17:26:17.038187] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.005 [2024-12-07 17:26:17.038203] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.005 [2024-12-07 17:26:17.038218] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.005 17:26:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:44.005 17:26:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:44.005 [2024-12-07 17:26:17.067074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
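
The echo 1 at sw_hotplug.sh@40 is a surprise removal: the controller's outstanding admin commands are aborted by the driver, producing the nvme_ctrlr_fail and "aborting outstanding command" lines above. The xtrace hides the redirect target, but the conventional Linux interface for this is the device's sysfs remove node (run as root):

    # Surprise-remove an NVMe controller from the PCI bus; any
    # outstanding AERs are aborted, matching the errors in this log.
    bdf=0000:00:10.0            # first controller in this run
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"
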
00:09:44.005 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:44.005 [2024-12-07 17:26:17.068270] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.005 [2024-12-07 17:26:17.068326] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.005 [2024-12-07 17:26:17.068348] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.005 [2024-12-07 17:26:17.068364] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.005 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:44.005 [2024-12-07 17:26:17.070235] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.005 [2024-12-07 17:26:17.070325] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.005 [2024-12-07 17:26:17.070345] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.005 [2024-12-07 17:26:17.070359] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:44.005 17:26:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:44.005 17:26:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:44.005 17:26:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:44.005 17:26:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:44.005 17:26:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:44.005 17:26:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:44.005 17:26:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:44.005 17:26:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:44.005 17:26:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:44.005 17:26:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:44.005 Attaching to 0000:00:10.0 00:09:44.005 Attached to 0000:00:10.0 00:09:44.005 QEMU NVMe Ctrl (12340 ): 32 I/Os completed (+32) 00:09:44.005 00:09:44.005 17:26:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:44.005 17:26:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:44.005 17:26:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:44.005 Attaching to 0000:00:11.0 00:09:44.005 Attached to 0000:00:11.0 00:09:44.949 QEMU NVMe Ctrl (12340 ): 2777 I/Os completed (+2745) 00:09:44.949 QEMU NVMe Ctrl (12341 ): 2568 I/Os completed (+2568) 00:09:44.949 00:09:45.950 QEMU NVMe Ctrl (12340 ): 5517 I/Os completed (+2740) 00:09:45.950 QEMU NVMe Ctrl (12341 ): 5312 I/Os completed (+2744) 00:09:45.950 00:09:46.884 QEMU NVMe Ctrl (12340 ): 9039 I/Os completed (+3522) 00:09:46.884 QEMU NVMe Ctrl (12341 ): 8839 I/Os completed (+3527) 00:09:46.884 00:09:48.259 QEMU NVMe Ctrl (12340 ): 12703 I/Os completed (+3664) 00:09:48.259 QEMU NVMe Ctrl (12341 ): 12499 I/Os completed (+3660) 00:09:48.259 00:09:49.202 QEMU NVMe Ctrl (12340 ): 15971 I/Os completed (+3268) 00:09:49.202 QEMU NVMe Ctrl (12341 ): 15710 I/Os completed (+3211) 00:09:49.202 00:09:50.139 QEMU NVMe Ctrl (12340 ): 19962 I/Os completed (+3991) 00:09:50.139 QEMU NVMe Ctrl (12341 ): 20178 I/Os completed (+4468) 00:09:50.139 00:09:51.073 QEMU NVMe Ctrl (12340 ): 24516 I/Os completed (+4554) 00:09:51.073 QEMU NVMe Ctrl (12341 ): 24783 I/Os completed (+4605) 00:09:51.073 00:09:52.006 QEMU NVMe Ctrl (12340 ): 28159 I/Os completed (+3643) 00:09:52.006 
QEMU NVMe Ctrl (12341 ): 28457 I/Os completed (+3674) 00:09:52.006 00:09:52.939 QEMU NVMe Ctrl (12340 ): 31910 I/Os completed (+3751) 00:09:52.939 QEMU NVMe Ctrl (12341 ): 32038 I/Os completed (+3581) 00:09:52.939 00:09:53.891 QEMU NVMe Ctrl (12340 ): 35402 I/Os completed (+3492) 00:09:53.891 QEMU NVMe Ctrl (12341 ): 35543 I/Os completed (+3505) 00:09:53.891 00:09:55.278 QEMU NVMe Ctrl (12340 ): 38418 I/Os completed (+3016) 00:09:55.278 QEMU NVMe Ctrl (12341 ): 38715 I/Os completed (+3172) 00:09:55.278 00:09:56.225 QEMU NVMe Ctrl (12340 ): 40963 I/Os completed (+2545) 00:09:56.225 QEMU NVMe Ctrl (12341 ): 41247 I/Os completed (+2532) 00:09:56.225 00:09:56.225 17:26:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:56.225 17:26:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:56.225 17:26:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:56.225 17:26:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:56.225 [2024-12-07 17:26:29.323197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:56.225 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:56.225 [2024-12-07 17:26:29.324917] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:56.225 [2024-12-07 17:26:29.325017] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:56.225 [2024-12-07 17:26:29.325039] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:56.225 [2024-12-07 17:26:29.325061] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:56.225 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:56.225 [2024-12-07 17:26:29.327848] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:56.225 [2024-12-07 17:26:29.327934] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:56.225 [2024-12-07 17:26:29.327956] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:56.225 [2024-12-07 17:26:29.327976] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:56.225 17:26:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:56.225 17:26:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:56.225 [2024-12-07 17:26:29.346496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
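
The first re-attach cycle was traced at sw_hotplug.sh@56-62 above: rescan the bus, pin the wanted driver, re-probe the device, then clear the override (the final echo ''). A sketch using the standard sysfs nodes; the script's exact redirect targets are likewise hidden by the xtrace:

    # Bring a removed device back and bind it to uio_pci_generic.
    bdf=0000:00:10.0 driver=uio_pci_generic
    echo 1 > /sys/bus/pci/rescan
    echo "$driver" > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"   # clear override
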
00:09:56.225 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:56.225 [2024-12-07 17:26:29.347805] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:56.225 [2024-12-07 17:26:29.347872] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:56.225 [2024-12-07 17:26:29.347900] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:56.225 [2024-12-07 17:26:29.347918] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:56.225 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:56.225 [2024-12-07 17:26:29.350025] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:56.225 [2024-12-07 17:26:29.350082] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:56.225 [2024-12-07 17:26:29.350102] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:56.225 [2024-12-07 17:26:29.350120] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:56.225 17:26:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:56.225 17:26:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:56.225 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:56.225 EAL: Scan for (pci) bus failed. 00:09:56.225 17:26:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:56.225 17:26:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:56.225 17:26:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:56.225 17:26:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:56.225 17:26:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:56.225 17:26:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:56.225 17:26:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:56.225 17:26:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:56.225 Attaching to 0000:00:10.0 00:09:56.225 Attached to 0000:00:10.0 00:09:56.487 17:26:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:56.487 17:26:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:56.487 17:26:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:56.487 Attaching to 0000:00:11.0 00:09:56.487 Attached to 0000:00:11.0 00:09:57.060 QEMU NVMe Ctrl (12340 ): 2588 I/Os completed (+2588) 00:09:57.060 QEMU NVMe Ctrl (12341 ): 2280 I/Os completed (+2280) 00:09:57.060 00:09:58.002 QEMU NVMe Ctrl (12340 ): 6376 I/Os completed (+3788) 00:09:58.002 QEMU NVMe Ctrl (12341 ): 6028 I/Os completed (+3748) 00:09:58.002 00:09:58.941 QEMU NVMe Ctrl (12340 ): 10162 I/Os completed (+3786) 00:09:58.941 QEMU NVMe Ctrl (12341 ): 9823 I/Os completed (+3795) 00:09:58.941 00:09:59.900 QEMU NVMe Ctrl (12340 ): 13927 I/Os completed (+3765) 00:09:59.900 QEMU NVMe Ctrl (12341 ): 13554 I/Os completed (+3731) 00:09:59.900 00:10:01.283 QEMU NVMe Ctrl (12340 ): 17766 I/Os completed (+3839) 00:10:01.283 QEMU NVMe Ctrl (12341 ): 17389 I/Os completed (+3835) 00:10:01.283 00:10:02.225 QEMU NVMe Ctrl (12340 ): 21593 I/Os completed (+3827) 00:10:02.225 QEMU NVMe Ctrl (12341 ): 21220 I/Os completed (+3831) 00:10:02.225 00:10:03.167 QEMU NVMe Ctrl (12340 ): 24557 I/Os completed (+2964) 00:10:03.167 QEMU NVMe Ctrl (12341 ): 24191 I/Os completed (+2971) 00:10:03.167 
00:10:04.107 QEMU NVMe Ctrl (12340 ): 28277 I/Os completed (+3720) 00:10:04.107 QEMU NVMe Ctrl (12341 ): 27847 I/Os completed (+3656) 00:10:04.107 00:10:05.047 QEMU NVMe Ctrl (12340 ): 32098 I/Os completed (+3821) 00:10:05.048 QEMU NVMe Ctrl (12341 ): 31654 I/Os completed (+3807) 00:10:05.048 00:10:06.041 QEMU NVMe Ctrl (12340 ): 35994 I/Os completed (+3896) 00:10:06.041 QEMU NVMe Ctrl (12341 ): 35550 I/Os completed (+3896) 00:10:06.041 00:10:06.983 QEMU NVMe Ctrl (12340 ): 39850 I/Os completed (+3856) 00:10:06.983 QEMU NVMe Ctrl (12341 ): 39406 I/Os completed (+3856) 00:10:06.983 00:10:07.928 QEMU NVMe Ctrl (12340 ): 43532 I/Os completed (+3682) 00:10:07.928 QEMU NVMe Ctrl (12341 ): 43078 I/Os completed (+3672) 00:10:07.928 00:10:08.500 17:26:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:08.500 17:26:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:08.500 17:26:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:08.500 17:26:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:08.500 [2024-12-07 17:26:41.655130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:08.500 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:08.500 [2024-12-07 17:26:41.656091] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.500 [2024-12-07 17:26:41.656127] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.500 [2024-12-07 17:26:41.656143] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.500 [2024-12-07 17:26:41.656158] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.500 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:08.500 [2024-12-07 17:26:41.657945] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.500 [2024-12-07 17:26:41.658000] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.500 [2024-12-07 17:26:41.658013] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.500 [2024-12-07 17:26:41.658028] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.500 17:26:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:08.500 17:26:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:08.500 [2024-12-07 17:26:41.678305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
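
The once-per-second status lines print cumulative completions plus the delta in parentheses, so a rough throughput figure can be recovered straight from the log. A small sketch, with the log file name assumed:

    # Average the "(+N)" per-interval deltas printed above.
    grep -o '(+[0-9]\+)' sw_hotplug.log | tr -d '(+)' |
        awk '{ sum += $1; n++ } END { if (n) printf "avg %.0f I/Os per sample\n", sum / n }'
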
00:10:08.500 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:08.500 [2024-12-07 17:26:41.679207] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.500 [2024-12-07 17:26:41.679242] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.500 [2024-12-07 17:26:41.679256] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.500 [2024-12-07 17:26:41.679270] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.500 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:08.500 [2024-12-07 17:26:41.680669] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.500 [2024-12-07 17:26:41.680701] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.500 [2024-12-07 17:26:41.680716] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.500 [2024-12-07 17:26:41.680727] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.500 17:26:41 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:08.500 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:08.500 17:26:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:08.500 EAL: Scan for (pci) bus failed. 00:10:08.500 17:26:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:08.500 17:26:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:08.500 17:26:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:08.761 17:26:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:08.761 17:26:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:08.761 17:26:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:08.761 17:26:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:08.761 17:26:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:08.761 Attaching to 0000:00:10.0 00:10:08.761 Attached to 0000:00:10.0 00:10:08.761 17:26:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:08.761 17:26:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:08.761 17:26:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:08.761 Attaching to 0000:00:11.0 00:10:08.761 Attached to 0000:00:11.0 00:10:08.761 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:08.761 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:08.761 [2024-12-07 17:26:41.999109] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:20.983 17:26:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:20.983 17:26:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:20.983 17:26:53 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.95 00:10:20.983 17:26:53 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.95 00:10:20.983 17:26:53 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:20.983 17:26:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.95 00:10:20.983 17:26:53 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.95 2 00:10:20.983 remove_attach_helper took 42.95s to complete (handling 2 nvme drive(s)) 17:26:53 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:27.571 17:26:59 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66682 00:10:27.571 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66682) - No such process 00:10:27.571 17:26:59 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66682 00:10:27.571 17:26:59 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:27.571 17:26:59 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:27.571 17:26:59 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:27.571 17:26:59 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67232 00:10:27.571 17:26:59 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:27.571 17:26:59 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67232 00:10:27.571 17:26:59 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67232 ']' 00:10:27.571 17:26:59 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:27.571 17:26:59 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.571 17:26:59 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:27.571 17:26:59 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.571 17:26:59 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.571 17:26:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:27.571 [2024-12-07 17:27:00.088213] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
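
kill -0 above probes the hotplug app without delivering a signal, which is how sw_hotplug.sh@93 learns the process already exited; waitforlisten then polls the freshly started spdk_tgt (pid 67232) until it answers on /var/tmp/spdk.sock. A sketch of that wait, assuming a plain rpc.py probe loop rather than the exact autotest_common.sh code (rpc.py path relative to the SPDK repo):

    # Poll until the target process answers RPCs on its UNIX socket.
    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        while kill -0 "$pid" 2>/dev/null; do
            if scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        echo "target $pid exited before listening on $sock" >&2
        return 1
    }
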
00:10:27.571 [2024-12-07 17:27:00.088372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67232 ] 00:10:27.571 [2024-12-07 17:27:00.254815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.571 [2024-12-07 17:27:00.376573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.833 17:27:01 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:27.833 17:27:01 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:27.833 17:27:01 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:27.833 17:27:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.833 17:27:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:27.833 17:27:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.833 17:27:01 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:27.833 17:27:01 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:27.833 17:27:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:27.833 17:27:01 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:27.833 17:27:01 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:27.833 17:27:01 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:27.833 17:27:01 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:27.833 17:27:01 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:27.833 17:27:01 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:27.833 17:27:01 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:27.833 17:27:01 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:27.833 17:27:01 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:27.833 17:27:01 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:34.400 17:27:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:34.400 17:27:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:34.400 17:27:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:34.400 17:27:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:34.400 17:27:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:34.400 17:27:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:34.400 17:27:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:34.400 17:27:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:34.400 17:27:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:34.400 17:27:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:34.400 17:27:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:34.400 17:27:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.400 17:27:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:34.400 17:27:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.400 17:27:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:34.400 17:27:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:34.400 [2024-12-07 17:27:07.178200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:34.400 [2024-12-07 17:27:07.179595] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.400 [2024-12-07 17:27:07.179631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.400 [2024-12-07 17:27:07.179645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.400 [2024-12-07 17:27:07.179662] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.400 [2024-12-07 17:27:07.179670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.400 [2024-12-07 17:27:07.179679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.400 [2024-12-07 17:27:07.179686] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.400 [2024-12-07 17:27:07.179694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.400 [2024-12-07 17:27:07.179700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.400 [2024-12-07 17:27:07.179712] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.400 [2024-12-07 17:27:07.179718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.400 [2024-12-07 17:27:07.179726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.400 17:27:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:34.400 17:27:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:34.400 17:27:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:34.400 17:27:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:34.400 17:27:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:34.400 17:27:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.400 17:27:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:34.400 17:27:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:34.400 [2024-12-07 17:27:07.678192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
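
With bdev_nvme_set_hotplug -e the target now watches for device arrival itself, so this pass of the helper tracks progress through the bdev layer: bdev_bdfs feeds bdev_get_bdevs through the jq filter seen at sw_hotplug.sh@12-13. Reconstructed from the trace, with rpc.py standing in for the rpc_cmd wrapper:

    # List the PCI addresses backing the currently registered NVMe
    # bdevs, one BDF per line, de-duplicated; the jq filter is copied
    # verbatim from the xtrace above.
    bdev_bdfs() {
        scripts/rpc.py bdev_get_bdevs |
            jq -r '.[].driver_specific.nvme[].pci_address' |
            sort -u
    }
    bdfs=($(bdev_bdfs))
    echo "${#bdfs[@]} controller(s) still present"
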
00:10:34.400 [2024-12-07 17:27:07.679330] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.400 [2024-12-07 17:27:07.679357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.400 [2024-12-07 17:27:07.679367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.400 [2024-12-07 17:27:07.679379] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.400 [2024-12-07 17:27:07.679388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.400 [2024-12-07 17:27:07.679395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.400 [2024-12-07 17:27:07.679403] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.400 [2024-12-07 17:27:07.679410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.400 [2024-12-07 17:27:07.679418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.401 [2024-12-07 17:27:07.679425] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.401 [2024-12-07 17:27:07.679432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:34.401 [2024-12-07 17:27:07.679439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.401 17:27:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.401 17:27:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:34.401 17:27:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:34.964 17:27:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:34.964 17:27:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:34.964 17:27:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:34.964 17:27:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:34.964 17:27:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:34.964 17:27:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:34.964 17:27:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.964 17:27:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:34.964 17:27:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.964 17:27:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:34.964 17:27:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:34.964 17:27:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:34.964 17:27:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:34.964 17:27:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:35.222 17:27:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:35.222 17:27:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:35.222 17:27:08 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:35.222 17:27:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:35.222 17:27:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:35.222 17:27:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:35.222 17:27:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:35.222 17:27:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:47.420 17:27:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:47.420 17:27:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:47.420 17:27:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:47.420 17:27:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.420 17:27:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.420 17:27:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.420 17:27:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.420 17:27:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.420 17:27:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.420 17:27:20 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:47.420 17:27:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:47.420 17:27:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:47.420 17:27:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:47.420 17:27:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:47.420 17:27:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:47.420 17:27:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:47.420 17:27:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:47.420 17:27:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:47.420 17:27:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.420 17:27:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.420 17:27:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.420 17:27:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.420 17:27:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.420 17:27:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.421 17:27:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:47.421 17:27:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:47.421 [2024-12-07 17:27:20.578363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
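
The (( count > 0 )) / sleep 0.5 cycle above is the detach wait: keep listing bdev-backed BDFs until the removed controllers disappear from the target. As a standalone loop (bdev_bdfs as reconstructed earlier):

    # Poll the bdev BDF list every 0.5 s until it drains to empty.
    bdev_bdfs() {
        scripts/rpc.py bdev_get_bdevs |
            jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
    }
    while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done
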
00:10:47.421 [2024-12-07 17:27:20.579612] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.421 [2024-12-07 17:27:20.579713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.421 [2024-12-07 17:27:20.579769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.421 [2024-12-07 17:27:20.579804] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.421 [2024-12-07 17:27:20.579821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.421 [2024-12-07 17:27:20.579874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.421 [2024-12-07 17:27:20.579900] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.421 [2024-12-07 17:27:20.579917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.421 [2024-12-07 17:27:20.579970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.421 [2024-12-07 17:27:20.580012] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.421 [2024-12-07 17:27:20.580028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.421 [2024-12-07 17:27:20.580052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.678 [2024-12-07 17:27:20.978359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
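
The completion dumps above are expected teardown noise rather than test failures; the opcode and status fields decode as follows (values from the NVMe base specification):

    # ASYNC EVENT REQUEST (0c): admin opcode 0x0C, the long-lived AER
    # commands a driver keeps outstanding on every controller.
    # ABORTED - BY REQUEST (00/07): status code type 0x0 (generic),
    # status code 0x07 (Command Abort Requested) - the driver aborts
    # its own AERs when the controller is hot-removed.
    printf '0x%02x -> Asynchronous Event Request\n' 0x0c
    printf '%x/%02x -> Command Abort Requested\n' 0x0 0x07
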
00:10:47.678 [2024-12-07 17:27:20.979612] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.678 [2024-12-07 17:27:20.979709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.678 [2024-12-07 17:27:20.979771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.678 [2024-12-07 17:27:20.979802] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.678 [2024-12-07 17:27:20.979848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.678 [2024-12-07 17:27:20.979875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.678 [2024-12-07 17:27:20.979927] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.678 [2024-12-07 17:27:20.979945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.679 [2024-12-07 17:27:20.980004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.679 [2024-12-07 17:27:20.980035] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:47.679 [2024-12-07 17:27:20.980053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:47.679 [2024-12-07 17:27:20.980101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:47.937 17:27:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:47.937 17:27:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:47.937 17:27:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:47.937 17:27:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:47.937 17:27:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:47.937 17:27:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:47.937 17:27:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.937 17:27:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:47.937 17:27:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.937 17:27:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:47.937 17:27:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:47.937 17:27:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:47.937 17:27:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:47.937 17:27:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:47.937 17:27:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:47.937 17:27:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:47.937 17:27:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:47.937 17:27:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:47.937 17:27:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:48.196 17:27:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:48.196 17:27:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:48.196 17:27:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:00.404 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:00.404 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:00.404 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:00.404 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:00.404 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:00.404 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:00.404 17:27:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.404 17:27:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.404 17:27:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.404 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:00.404 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:00.404 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:00.404 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:00.404 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:00.404 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:00.405 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:00.405 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:00.405 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:00.405 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:00.405 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:00.405 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:00.405 17:27:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.405 17:27:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.405 17:27:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.405 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:00.405 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:00.405 [2024-12-07 17:27:33.478548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
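
Each 12-second settle (sleep 12 at sw_hotplug.sh@66) ends with the check traced at sw_hotplug.sh@70-71: the sorted BDF list reported by the bdev layer must again equal the pair the test started with. In plain form:

    # Verify both controllers came back after the rescan.
    bdev_bdfs() {
        scripts/rpc.py bdev_get_bdevs |
            jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
    }
    expected='0000:00:10.0 0000:00:11.0'
    if [[ $(bdev_bdfs | xargs) == "$expected" ]]; then
        echo 'all controllers re-attached'
    else
        echo 'attach incomplete' >&2
    fi
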
00:11:00.405 [2024-12-07 17:27:33.479857] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.405 [2024-12-07 17:27:33.479958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.405 [2024-12-07 17:27:33.480054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.405 [2024-12-07 17:27:33.480124] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.405 [2024-12-07 17:27:33.480145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.405 [2024-12-07 17:27:33.480180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.405 [2024-12-07 17:27:33.480206] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.405 [2024-12-07 17:27:33.480260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.405 [2024-12-07 17:27:33.480287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.405 [2024-12-07 17:27:33.480312] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.405 [2024-12-07 17:27:33.480329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.405 [2024-12-07 17:27:33.480439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.662 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:00.662 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:00.662 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:00.662 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:00.662 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:00.662 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:00.662 17:27:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.662 17:27:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.662 [2024-12-07 17:27:33.978544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:00.662 [2024-12-07 17:27:33.979711] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.662 [2024-12-07 17:27:33.979742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.663 [2024-12-07 17:27:33.979753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.663 [2024-12-07 17:27:33.979766] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.663 [2024-12-07 17:27:33.979774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.663 [2024-12-07 17:27:33.979781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.663 [2024-12-07 17:27:33.979791] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.663 [2024-12-07 17:27:33.979798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.663 [2024-12-07 17:27:33.979807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.663 [2024-12-07 17:27:33.979814] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.663 [2024-12-07 17:27:33.979822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:00.663 [2024-12-07 17:27:33.979828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:00.663 17:27:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.663 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:00.663 17:27:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:01.227 17:27:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:01.227 17:27:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:01.227 17:27:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:01.227 17:27:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:01.227 17:27:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:01.227 17:27:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:01.227 17:27:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.227 17:27:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:01.227 17:27:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.227 17:27:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:01.227 17:27:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:01.485 17:27:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:01.485 17:27:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:01.485 17:27:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:01.485 17:27:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:01.485 17:27:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:01.485 17:27:34 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:01.485 17:27:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:01.485 17:27:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:01.485 17:27:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:01.485 17:27:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:01.485 17:27:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:13.739 17:27:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:13.739 17:27:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:13.739 17:27:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:13.739 17:27:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:13.739 17:27:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:13.739 17:27:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:13.739 17:27:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.739 17:27:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.739 17:27:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.739 17:27:46 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:13.739 17:27:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:13.739 17:27:46 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.73 00:11:13.739 17:27:46 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.73 00:11:13.739 17:27:46 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:13.739 17:27:46 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.73 00:11:13.739 17:27:46 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.73 2 00:11:13.739 remove_attach_helper took 45.73s to complete (handling 2 nvme drive(s)) 17:27:46 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:13.739 17:27:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.739 17:27:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.739 17:27:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.739 17:27:46 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:13.739 17:27:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.739 17:27:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:13.739 17:27:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.739 17:27:46 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:13.739 17:27:46 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:13.739 17:27:46 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:13.739 17:27:46 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:13.739 17:27:46 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:13.739 17:27:46 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:13.739 17:27:46 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:13.739 17:27:46 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:13.739 17:27:46 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:13.739 17:27:46 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # 
local hotplug_wait=6 00:11:13.739 17:27:46 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:13.739 17:27:46 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:13.739 17:27:46 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:20.292 17:27:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:20.292 17:27:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:20.292 17:27:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:20.292 17:27:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:20.292 17:27:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:20.292 17:27:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:20.292 17:27:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:20.292 17:27:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:20.292 17:27:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:20.292 17:27:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:20.292 17:27:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:20.292 17:27:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.292 17:27:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:20.292 17:27:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.292 17:27:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:20.292 17:27:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:20.292 [2024-12-07 17:27:52.938086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:20.292 [2024-12-07 17:27:52.939074] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.292 [2024-12-07 17:27:52.939167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.292 [2024-12-07 17:27:52.939222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.292 [2024-12-07 17:27:52.939275] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.292 [2024-12-07 17:27:52.939293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.292 [2024-12-07 17:27:52.939318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.292 [2024-12-07 17:27:52.939370] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.293 [2024-12-07 17:27:52.939418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.293 [2024-12-07 17:27:52.939441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.293 [2024-12-07 17:27:52.939466] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.293 [2024-12-07 17:27:52.939530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.293 [2024-12-07 17:27:52.939558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.293 [2024-12-07 17:27:53.338080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:20.293 [2024-12-07 17:27:53.339004] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.293 [2024-12-07 17:27:53.339099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.293 [2024-12-07 17:27:53.339160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.293 [2024-12-07 17:27:53.339188] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.293 [2024-12-07 17:27:53.339285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.293 [2024-12-07 17:27:53.339340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.293 [2024-12-07 17:27:53.339364] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.293 [2024-12-07 17:27:53.339379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.293 [2024-12-07 17:27:53.339403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.293 [2024-12-07 17:27:53.339426] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.293 [2024-12-07 17:27:53.339555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.293 [2024-12-07 17:27:53.339581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.293 17:27:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:20.293 17:27:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:20.293 17:27:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:20.293 17:27:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:20.293 17:27:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:20.293 17:27:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:20.293 17:27:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.293 17:27:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:20.293 17:27:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.293 17:27:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:20.293 17:27:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:20.293 17:27:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:20.293 17:27:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:20.293 17:27:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:20.293 17:27:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:20.293 17:27:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:20.293 17:27:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # 
for dev in "${nvmes[@]}" 00:11:20.293 17:27:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:20.293 17:27:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:20.551 17:27:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:20.551 17:27:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:20.551 17:27:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:32.762 17:28:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:32.762 17:28:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:32.762 17:28:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:32.762 17:28:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:32.762 17:28:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:32.762 17:28:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:32.762 17:28:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.762 17:28:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.762 17:28:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.762 17:28:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:32.762 17:28:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:32.762 17:28:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:32.762 17:28:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:32.762 17:28:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:32.762 17:28:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:32.762 17:28:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:32.762 17:28:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:32.762 17:28:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:32.762 17:28:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:32.762 17:28:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:32.762 17:28:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:32.762 17:28:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.762 17:28:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.762 17:28:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.762 17:28:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:32.762 17:28:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:32.762 [2024-12-07 17:28:05.838283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
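xtrace hides redirection targets, so the echo lines at sw_hotplug.sh@40 and @56-62 show only what is written, not where. A sketch of both halves of the event under the assumption that they use the standard Linux PCI hotplug sysfs interface — every path below is inferred, none is taken from the log:

# Detach: surprise-remove each controller, then poll until SPDK drops it.
for dev in "${nvmes[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$dev/remove"        # @40 (assumed target)
done
# Reattach: rescan the bus and pin the rediscovered devices to the driver.
echo 1 > /sys/bus/pci/rescan                           # @56 (assumed target)
for dev in "${nvmes[@]}"; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59
    echo "$dev" > /sys/bus/pci/drivers_probe           # @60/@61 (assumed)
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62: clear the override
done

The sleep 12 at @66 then gives the uio_pci_generic rebind time to settle before @70-71 verify that bdev_bdfs reports exactly 0000:00:10.0 and 0000:00:11.0 again.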
00:11:32.762 [2024-12-07 17:28:05.839255] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.762 [2024-12-07 17:28:05.839282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.762 [2024-12-07 17:28:05.839293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.762 [2024-12-07 17:28:05.839309] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.762 [2024-12-07 17:28:05.839316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.762 [2024-12-07 17:28:05.839323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.762 [2024-12-07 17:28:05.839331] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.762 [2024-12-07 17:28:05.839339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.762 [2024-12-07 17:28:05.839345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.762 [2024-12-07 17:28:05.839353] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.762 [2024-12-07 17:28:05.839359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.762 [2024-12-07 17:28:05.839367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:33.019 [2024-12-07 17:28:06.238278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
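The "(( 2 > 0 )) … sleep 0.5" cadence above is the detach wait loop at sw_hotplug.sh@50-51. A sketch reconstructed from the trace (the real loop body may be ordered slightly differently):

bdfs=($(bdev_bdfs))
while ((${#bdfs[@]} > 0)); do
    # Announce every controller SPDK still exposes, then re-poll.
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
    bdfs=($(bdev_bdfs))
done

Because printf repeats its format string once per argument, a single call emits one "Still waiting" line for each remaining BDF, which is exactly the output interleaved with the abort-tracker errors above.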
00:11:33.019 [2024-12-07 17:28:06.239148] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:33.019 [2024-12-07 17:28:06.239175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:33.019 [2024-12-07 17:28:06.239186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:33.019 [2024-12-07 17:28:06.239197] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:33.019 [2024-12-07 17:28:06.239207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:33.019 [2024-12-07 17:28:06.239214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:33.019 [2024-12-07 17:28:06.239223] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:33.019 [2024-12-07 17:28:06.239230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:33.019 [2024-12-07 17:28:06.239238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:33.019 [2024-12-07 17:28:06.239245] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:33.019 [2024-12-07 17:28:06.239252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:33.019 [2024-12-07 17:28:06.239259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:33.019 17:28:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:33.019 17:28:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:33.019 17:28:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:33.019 17:28:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:33.019 17:28:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:33.019 17:28:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:33.019 17:28:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.019 17:28:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:33.019 17:28:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.019 17:28:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:33.019 17:28:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:33.277 17:28:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:33.277 17:28:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:33.277 17:28:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:33.277 17:28:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:33.277 17:28:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:33.277 17:28:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:33.277 17:28:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:33.277 17:28:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:33.277 17:28:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:33.277 17:28:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:33.277 17:28:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:45.475 17:28:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:45.475 17:28:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:45.475 17:28:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:45.475 17:28:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:45.475 17:28:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:45.475 17:28:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:45.475 17:28:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.475 17:28:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:45.475 17:28:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.475 17:28:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:45.475 17:28:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:45.475 17:28:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:45.475 17:28:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:45.475 [2024-12-07 17:28:18.638512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:45.475 [2024-12-07 17:28:18.639861] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.475 [2024-12-07 17:28:18.639968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.475 [2024-12-07 17:28:18.640055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.475 [2024-12-07 17:28:18.640133] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.475 [2024-12-07 17:28:18.640216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.475 [2024-12-07 17:28:18.640246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.475 [2024-12-07 17:28:18.640294] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.476 [2024-12-07 17:28:18.640319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.476 [2024-12-07 17:28:18.640346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.476 [2024-12-07 17:28:18.640396] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:45.476 [2024-12-07 17:28:18.640414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:45.476 [2024-12-07 17:28:18.640468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:45.476 17:28:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:45.476 17:28:18 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:45.476 17:28:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:45.476 17:28:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:45.476 17:28:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:45.476 17:28:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:45.476 17:28:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:45.476 17:28:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:45.476 17:28:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.476 17:28:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:45.476 17:28:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.476 17:28:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:45.476 17:28:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:46.041 [2024-12-07 17:28:19.138513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:46.042 [2024-12-07 17:28:19.139480] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.042 [2024-12-07 17:28:19.139575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.042 [2024-12-07 17:28:19.139642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.042 [2024-12-07 17:28:19.139674] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.042 [2024-12-07 17:28:19.139794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.042 [2024-12-07 17:28:19.139864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.042 [2024-12-07 17:28:19.139890] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.042 [2024-12-07 17:28:19.139906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.042 [2024-12-07 17:28:19.139930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.042 [2024-12-07 17:28:19.140004] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.042 [2024-12-07 17:28:19.140146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:46.042 [2024-12-07 17:28:19.140250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:46.042 17:28:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:46.042 17:28:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:46.042 17:28:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:46.042 17:28:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:46.042 17:28:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:46.042 17:28:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:46.042 17:28:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.042 17:28:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.042 17:28:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.042 17:28:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:46.042 17:28:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:46.042 17:28:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:46.042 17:28:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:46.042 17:28:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:46.042 17:28:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:46.042 17:28:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:46.042 17:28:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:46.042 17:28:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:46.042 17:28:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:46.300 17:28:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:46.300 17:28:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:46.300 17:28:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:58.492 17:28:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:58.492 17:28:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:58.492 17:28:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:58.492 17:28:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:58.492 17:28:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:58.492 17:28:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:58.492 17:28:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.492 17:28:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:58.492 17:28:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.492 17:28:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:58.492 17:28:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:58.492 17:28:31 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.68 00:11:58.492 17:28:31 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.68 00:11:58.492 17:28:31 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:58.492 17:28:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.68 00:11:58.492 17:28:31 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.68 2 00:11:58.492 remove_attach_helper took 44.68s to complete (handling 2 nvme drive(s)) 17:28:31 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:58.492 17:28:31 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67232 00:11:58.492 17:28:31 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67232 ']' 00:11:58.492 17:28:31 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67232 00:11:58.492 17:28:31 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:11:58.492 17:28:31 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.492 17:28:31 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67232 00:11:58.492 killing process with pid 67232 00:11:58.492 17:28:31 sw_hotplug -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:11:58.492 17:28:31 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:58.492 17:28:31 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67232' 00:11:58.492 17:28:31 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67232 00:11:58.492 17:28:31 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67232 00:11:59.430 17:28:32 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:59.691 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:00.264 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:00.264 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:00.264 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:00.523 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:00.523 00:12:00.523 real 2m30.336s 00:12:00.523 user 1m52.208s 00:12:00.523 sys 0m16.689s 00:12:00.523 17:28:33 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.523 ************************************ 00:12:00.523 END TEST sw_hotplug 00:12:00.523 ************************************ 00:12:00.523 17:28:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.523 17:28:33 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:00.523 17:28:33 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:00.523 17:28:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:00.523 17:28:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.523 17:28:33 -- common/autotest_common.sh@10 -- # set +x 00:12:00.523 ************************************ 00:12:00.523 START TEST nvme_xnvme 00:12:00.523 ************************************ 00:12:00.523 17:28:33 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:00.523 * Looking for test storage... 
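Teardown above goes through killprocess 67232, the SPDK target launched for the suite. A sketch reconstructed from the trace — the sudo branch is simplified here and the real helper may do more bookkeeping:

killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1                  # @954: refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 0     # @958: nothing to do if it is already gone
    if [[ $(uname) == Linux ]]; then           # @959
        process_name=$(ps --no-headers -o comm= "$pid")   # @960: here, reactor_0
    fi
    [[ $process_name == sudo ]] && return 1    # @964 (assumed: never kill sudo itself)
    echo "killing process with pid $pid"       # @972
    kill "$pid"                                 # @973
    wait "$pid" || true                         # @978: reap the child
}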
00:12:00.523 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:00.523 17:28:33 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:00.523 17:28:33 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:00.523 17:28:33 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:00.787 17:28:33 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:00.787 17:28:33 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:00.787 17:28:33 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:00.787 17:28:33 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:00.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.787 --rc genhtml_branch_coverage=1 00:12:00.787 --rc genhtml_function_coverage=1 00:12:00.787 --rc genhtml_legend=1 00:12:00.787 --rc geninfo_all_blocks=1 00:12:00.787 --rc geninfo_unexecuted_blocks=1 00:12:00.787 00:12:00.787 ' 00:12:00.787 17:28:33 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:00.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.787 --rc genhtml_branch_coverage=1 00:12:00.787 --rc genhtml_function_coverage=1 00:12:00.787 --rc genhtml_legend=1 00:12:00.787 --rc geninfo_all_blocks=1 00:12:00.787 --rc geninfo_unexecuted_blocks=1 00:12:00.787 00:12:00.787 ' 00:12:00.787 17:28:33 
nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:00.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.787 --rc genhtml_branch_coverage=1 00:12:00.787 --rc genhtml_function_coverage=1 00:12:00.787 --rc genhtml_legend=1 00:12:00.787 --rc geninfo_all_blocks=1 00:12:00.787 --rc geninfo_unexecuted_blocks=1 00:12:00.787 00:12:00.787 ' 00:12:00.787 17:28:33 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:00.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.787 --rc genhtml_branch_coverage=1 00:12:00.787 --rc genhtml_function_coverage=1 00:12:00.787 --rc genhtml_legend=1 00:12:00.787 --rc geninfo_all_blocks=1 00:12:00.787 --rc geninfo_unexecuted_blocks=1 00:12:00.787 00:12:00.787 ' 00:12:00.787 17:28:33 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:00.787 17:28:33 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:00.787 17:28:33 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:00.787 17:28:33 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:00.787 17:28:33 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:00.787 17:28:33 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:00.787 17:28:33 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:00.787 17:28:33 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:00.787 17:28:33 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:00.787 17:28:33 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:00.787 17:28:33 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:00.787 17:28:33 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:00.787 17:28:33 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:00.787 17:28:33 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:00.787 17:28:33 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:00.787 17:28:33 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:00.787 17:28:33 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:00.787 17:28:33 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:00.787 17:28:33 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:00.787 17:28:33 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:00.787 17:28:33 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:00.787 17:28:33 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:00.787 17:28:33 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:00.787 17:28:33 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:00.787 17:28:33 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:00.787 17:28:33 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:00.787 17:28:33 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:00.787 17:28:33 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:00.787 17:28:33 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:00.787 17:28:33 nvme_xnvme -- common/build_config.sh@20 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 
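Just before this configuration dump, the coverage setup probed lcov with lt 1.15 2 (scripts/common.sh@373) to pick option names. A sketch of that comparison, simplified from the trace — the real function tallies lt/gt/eq counters before a case on $op:

lt() { cmp_versions "$1" '<' "$2"; }

decimal() {
    local d=$1
    # Non-numeric components compare as 0 (assumed fallback).
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
}

cmp_versions() {
    local ver1 ver2 ver1_l ver2_l v op=$2
    IFS=.-: read -ra ver1 <<< "$1"    # split on '.', '-' and ':'
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        ver1[v]=$(decimal "${ver1[v]:-0}")
        ver2[v]=$(decimal "${ver2[v]:-0}")
        ((ver1[v] > ver2[v])) && { [[ $op == '>' || $op == '>=' ]]; return; }
        ((ver1[v] < ver2[v])) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}

Here lt 1.15 2 succeeds (lcov reported 1.15, which is below 2), so the LCOV_OPTS/LCOV exports above use the pre-2.0 --rc lcov_branch_coverage / lcov_function_coverage names.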
00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:00.788 17:28:33 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:00.788 17:28:33 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:00.788 17:28:33 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:00.788 17:28:33 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:00.788 17:28:33 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:00.788 17:28:33 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:00.788 17:28:33 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:00.788 17:28:33 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:00.788 17:28:33 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 
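Every rpc_cmd call in this trace opens with xtrace_disable (autotest_common.sh@563, the set +x at @10) and closes once a status check such as the [[ 0 == 0 ]] at @591 passes, so the RPC plumbing does not flood the log. A sketch of the disable/restore idea — the helpers' real state bookkeeping is not visible in the trace, so the variable below is an assumption:

xtrace_disable() {
    # Remember whether -x was on, then silence tracing.
    [[ $- == *x* ]] && XTRACE_WAS_ON=1 || XTRACE_WAS_ON=0
    set +x
}

xtrace_restore() {
    # Only re-enable tracing if it was on when the bracket opened.
    [[ $XTRACE_WAS_ON == 1 ]] && set -x
    return 0
}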
00:12:00.788 17:28:33 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:00.788 17:28:33 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:00.788 17:28:33 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:00.788 17:28:33 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:00.788 17:28:33 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:00.788 17:28:33 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:00.788 17:28:33 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:00.788 17:28:33 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:00.788 #define SPDK_CONFIG_H 00:12:00.788 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:00.788 #define SPDK_CONFIG_APPS 1 00:12:00.788 #define SPDK_CONFIG_ARCH native 00:12:00.788 #define SPDK_CONFIG_ASAN 1 00:12:00.788 #undef SPDK_CONFIG_AVAHI 00:12:00.788 #undef SPDK_CONFIG_CET 00:12:00.788 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:00.788 #define SPDK_CONFIG_COVERAGE 1 00:12:00.788 #define SPDK_CONFIG_CROSS_PREFIX 00:12:00.788 #undef SPDK_CONFIG_CRYPTO 00:12:00.788 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:00.788 #undef SPDK_CONFIG_CUSTOMOCF 00:12:00.788 #undef SPDK_CONFIG_DAOS 00:12:00.788 #define SPDK_CONFIG_DAOS_DIR 00:12:00.788 #define SPDK_CONFIG_DEBUG 1 00:12:00.788 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:00.788 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:12:00.788 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:00.788 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:00.788 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:00.788 #undef SPDK_CONFIG_DPDK_UADK 00:12:00.788 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:00.788 #define SPDK_CONFIG_EXAMPLES 1 00:12:00.788 #undef SPDK_CONFIG_FC 00:12:00.788 #define SPDK_CONFIG_FC_PATH 00:12:00.788 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:00.788 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:00.788 #define SPDK_CONFIG_FSDEV 1 00:12:00.788 #undef SPDK_CONFIG_FUSE 00:12:00.788 #undef SPDK_CONFIG_FUZZER 00:12:00.788 #define SPDK_CONFIG_FUZZER_LIB 00:12:00.788 #undef SPDK_CONFIG_GOLANG 00:12:00.788 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:00.788 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:00.788 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:00.788 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:00.788 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:00.788 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:00.788 #undef SPDK_CONFIG_HAVE_LZ4 00:12:00.788 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:00.788 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:00.788 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:00.788 #define SPDK_CONFIG_IDXD 1 00:12:00.788 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:00.788 #undef SPDK_CONFIG_IPSEC_MB 00:12:00.788 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:00.788 #define SPDK_CONFIG_ISAL 1 00:12:00.788 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:00.788 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:00.788 #define SPDK_CONFIG_LIBDIR 00:12:00.788 #undef SPDK_CONFIG_LTO 00:12:00.788 #define SPDK_CONFIG_MAX_LCORES 128 00:12:00.788 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:00.788 #define SPDK_CONFIG_NVME_CUSE 1 00:12:00.788 #undef SPDK_CONFIG_OCF 00:12:00.788 #define SPDK_CONFIG_OCF_PATH 00:12:00.788 #define SPDK_CONFIG_OPENSSL_PATH 00:12:00.788 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:00.788 
#define SPDK_CONFIG_PGO_DIR 00:12:00.788 #undef SPDK_CONFIG_PGO_USE 00:12:00.788 #define SPDK_CONFIG_PREFIX /usr/local 00:12:00.788 #undef SPDK_CONFIG_RAID5F 00:12:00.788 #undef SPDK_CONFIG_RBD 00:12:00.788 #define SPDK_CONFIG_RDMA 1 00:12:00.788 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:00.788 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:00.788 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:00.788 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:00.788 #define SPDK_CONFIG_SHARED 1 00:12:00.788 #undef SPDK_CONFIG_SMA 00:12:00.788 #define SPDK_CONFIG_TESTS 1 00:12:00.788 #undef SPDK_CONFIG_TSAN 00:12:00.788 #define SPDK_CONFIG_UBLK 1 00:12:00.788 #define SPDK_CONFIG_UBSAN 1 00:12:00.788 #undef SPDK_CONFIG_UNIT_TESTS 00:12:00.789 #undef SPDK_CONFIG_URING 00:12:00.789 #define SPDK_CONFIG_URING_PATH 00:12:00.789 #undef SPDK_CONFIG_URING_ZNS 00:12:00.789 #undef SPDK_CONFIG_USDT 00:12:00.789 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:00.789 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:00.789 #undef SPDK_CONFIG_VFIO_USER 00:12:00.789 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:00.789 #define SPDK_CONFIG_VHOST 1 00:12:00.789 #define SPDK_CONFIG_VIRTIO 1 00:12:00.789 #undef SPDK_CONFIG_VTUNE 00:12:00.789 #define SPDK_CONFIG_VTUNE_DIR 00:12:00.789 #define SPDK_CONFIG_WERROR 1 00:12:00.789 #define SPDK_CONFIG_WPDK_DIR 00:12:00.789 #define SPDK_CONFIG_XNVME 1 00:12:00.789 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:00.789 17:28:33 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:00.789 17:28:33 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:00.789 17:28:33 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.789 17:28:33 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.789 17:28:33 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.789 17:28:33 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.789 17:28:33 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.789 17:28:33 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.789 17:28:33 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:00.789 17:28:33 nvme_xnvme -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:00.789 17:28:33 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:00.789 17:28:33 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:00.789 17:28:33 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:00.789 17:28:33 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:00.789 17:28:33 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:00.789 17:28:33 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:00.789 17:28:33 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:00.789 17:28:33 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:00.789 17:28:33 nvme_xnvme -- pm/common@68 -- # uname -s 00:12:00.789 17:28:33 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:00.789 17:28:33 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:00.789 17:28:33 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:00.789 17:28:33 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:00.789 17:28:33 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:00.789 17:28:33 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:00.789 17:28:33 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:00.789 17:28:33 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:00.789 17:28:33 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:00.789 17:28:33 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:00.789 17:28:33 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:00.789 17:28:33 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:00.789 17:28:33 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:00.789 17:28:33 nvme_xnvme -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:00.789 17:28:33 
nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:00.789 17:28:33 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@142 -- 
# : true 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:00.790 
17:28:33 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:00.790 
17:28:33 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 
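The long run of "-- # : <value>" / "-- # export <FLAG>" pairs in the trace above is autotest_common.sh giving every SPDK_TEST_*/SPDK_RUN_* flag a default before exporting it, then wiring up library paths and sanitizer runtime options. A minimal sketch of that defaulting idiom, with an illustrative subset of the flags rather than the script's full list:

    # ':' is a no-op command; "${VAR:=default}" assigns the fallback only
    # when the flag was not already set by autorun-spdk.conf. The ": 1",
    # ": 0", and ": rdma" lines in the trace are this pattern after
    # expansion. Fallback values here are illustrative; the real script
    # carries one such line per flag.
    : "${SPDK_TEST_NVME:=0}"
    : "${SPDK_RUN_ASAN:=0}"
    : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"
    export SPDK_TEST_NVME SPDK_RUN_ASAN SPDK_TEST_NVMF_TRANSPORT

    # Sanitizer options are exported the same way so every child process
    # aborts on the first error and keeps coredumps (values as in this run).
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134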
00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68595 ]] 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68595 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:00.790 17:28:33 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.oe1vpc 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.oe1vpc/tests/xnvme /tmp/spdk.oe1vpc 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13960491008 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5607231488 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:00.791 
17:28:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260625408 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13960491008 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5607231488 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265241600 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:00.791 17:28:34 
nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98849828864 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=852951040 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:00.791 * Looking for test storage... 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13960491008 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:00.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:00.791 17:28:34 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:00.792 17:28:34 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:12:00.792 17:28:34 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:12:00.792 17:28:34 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:00.792 17:28:34 nvme_xnvme -- 
common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:00.792 17:28:34 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:12:00.792 17:28:34 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:12:00.792 17:28:34 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:00.792 17:28:34 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:00.792 17:28:34 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:00.792 17:28:34 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:00.792 17:28:34 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:00.792 17:28:34 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:00.792 17:28:34 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:00.792 17:28:34 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:00.792 17:28:34 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:00.792 17:28:34 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:00.792 17:28:34 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:00.792 17:28:34 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:00.792 17:28:34 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:00.792 17:28:34 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:00.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.792 --rc genhtml_branch_coverage=1 00:12:00.792 --rc genhtml_function_coverage=1 00:12:00.792 --rc genhtml_legend=1 00:12:00.792 --rc geninfo_all_blocks=1 00:12:00.792 --rc geninfo_unexecuted_blocks=1 00:12:00.792 00:12:00.792 ' 00:12:00.792 17:28:34 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:00.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.792 --rc genhtml_branch_coverage=1 00:12:00.792 --rc genhtml_function_coverage=1 00:12:00.792 --rc genhtml_legend=1 00:12:00.792 --rc geninfo_all_blocks=1 00:12:00.792 --rc geninfo_unexecuted_blocks=1 00:12:00.792 00:12:00.792 ' 00:12:00.792 17:28:34 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:00.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.792 --rc genhtml_branch_coverage=1 00:12:00.792 --rc genhtml_function_coverage=1 00:12:00.792 --rc genhtml_legend=1 00:12:00.792 --rc geninfo_all_blocks=1 00:12:00.792 --rc geninfo_unexecuted_blocks=1 00:12:00.792 00:12:00.792 ' 00:12:00.792 17:28:34 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:00.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.792 --rc genhtml_branch_coverage=1 00:12:00.792 --rc genhtml_function_coverage=1 00:12:00.792 --rc genhtml_legend=1 00:12:00.792 --rc geninfo_all_blocks=1 00:12:00.792 --rc geninfo_unexecuted_blocks=1 00:12:00.792 00:12:00.792 ' 00:12:00.792 17:28:34 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.792 17:28:34 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.792 17:28:34 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.792 17:28:34 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.792 17:28:34 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.792 17:28:34 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:00.792 17:28:34 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.792 17:28:34 nvme_xnvme -- xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:00.792 17:28:34 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:00.792 17:28:34 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:00.792 17:28:34 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:00.793 17:28:34 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:00.793 17:28:34 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:00.793 17:28:34 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:00.793 17:28:34 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:00.793 17:28:34 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:00.793 17:28:34 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:00.793 17:28:34 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:00.793 17:28:34 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:00.793 17:28:34 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:00.793 17:28:34 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:00.793 17:28:34 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:00.793 
17:28:34 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:00.793 17:28:34 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:00.793 17:28:34 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:00.793 17:28:34 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:00.793 17:28:34 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:00.793 17:28:34 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:00.793 17:28:34 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:01.054 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:01.316 Waiting for block devices as requested 00:12:01.316 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:01.576 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:01.576 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:01.576 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:06.866 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:06.866 17:28:39 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:07.128 17:28:40 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:07.128 17:28:40 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:07.390 17:28:40 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:07.390 17:28:40 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:07.390 17:28:40 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:07.390 17:28:40 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:07.390 17:28:40 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:07.390 No valid GPT data, bailing 00:12:07.390 17:28:40 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:07.390 17:28:40 nvme_xnvme -- scripts/common.sh@394 -- # pt= 00:12:07.390 17:28:40 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:07.390 17:28:40 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:07.390 17:28:40 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:07.390 17:28:40 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:07.390 17:28:40 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:07.390 17:28:40 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:07.390 17:28:40 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:07.390 17:28:40 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:07.390 17:28:40 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:07.390 17:28:40 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:07.390 17:28:40 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:07.390 17:28:40 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:07.390 17:28:40 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:07.390 17:28:40 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:07.390 17:28:40 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:07.390 17:28:40 
nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:07.390 17:28:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.390 17:28:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:07.390 ************************************ 00:12:07.390 START TEST xnvme_rpc 00:12:07.390 ************************************ 00:12:07.390 17:28:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:07.390 17:28:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:07.390 17:28:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:07.390 17:28:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:07.391 17:28:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:07.391 17:28:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=68984 00:12:07.391 17:28:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:07.391 17:28:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 68984 00:12:07.391 17:28:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 68984 ']' 00:12:07.391 17:28:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.391 17:28:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.391 17:28:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.391 17:28:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.391 17:28:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.651 [2024-12-07 17:28:40.795469] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
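The banner above marks the spdk_tgt launch for the xnvme_rpc test; the trace that follows waits on the RPC socket, creates an xnvme bdev, and reads each parameter back out of framework_get_config to check that it round-trips. A compressed sketch of that flow, assuming scripts/rpc.py as the transport that rpc_cmd wraps (argument order and jq filter copied from the trace):

    # Start the target, create a libaio-backed xnvme bdev over RPC, then
    # verify the stored config matches what was requested.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt=$!
    # (the real script polls /var/tmp/spdk.sock via waitforlisten here)

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }
    rpc bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
    rpc framework_get_config bdev |
        jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'  # -> /dev/nvme0n1
    rpc bdev_xnvme_delete xnvme_bdev
    kill "$spdk_tgt"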
00:12:07.651 [2024-12-07 17:28:40.795820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68984 ] 00:12:07.651 [2024-12-07 17:28:40.961321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.912 [2024-12-07 17:28:41.082123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.483 xnvme_bdev 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.483 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.745 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.745 17:28:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:08.745 17:28:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:08.745 17:28:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:08.745 17:28:41 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:08.745 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.745 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.745 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.745 17:28:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:08.745 17:28:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:08.745 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.745 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.745 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.745 17:28:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 68984 00:12:08.745 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 68984 ']' 00:12:08.745 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 68984 00:12:08.745 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:08.745 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:08.745 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68984 00:12:08.745 killing process with pid 68984 00:12:08.745 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:08.745 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:08.745 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68984' 00:12:08.745 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 68984 00:12:08.745 17:28:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 68984 00:12:10.653 00:12:10.653 real 0m2.847s 00:12:10.653 user 0m2.839s 00:12:10.653 sys 0m0.469s 00:12:10.653 17:28:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:10.653 ************************************ 00:12:10.653 END TEST xnvme_rpc 00:12:10.653 ************************************ 00:12:10.653 17:28:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.653 17:28:43 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:10.653 17:28:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:10.653 17:28:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.653 17:28:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:10.653 ************************************ 00:12:10.653 START TEST xnvme_bdevperf 00:12:10.653 ************************************ 00:12:10.653 17:28:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:10.653 17:28:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:10.653 17:28:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:10.653 17:28:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:10.653 17:28:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:10.653 17:28:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:12:10.653 17:28:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:10.653 17:28:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:10.653 { 00:12:10.653 "subsystems": [ 00:12:10.653 { 00:12:10.653 "subsystem": "bdev", 00:12:10.653 "config": [ 00:12:10.653 { 00:12:10.653 "params": { 00:12:10.653 "io_mechanism": "libaio", 00:12:10.653 "conserve_cpu": false, 00:12:10.653 "filename": "/dev/nvme0n1", 00:12:10.653 "name": "xnvme_bdev" 00:12:10.653 }, 00:12:10.653 "method": "bdev_xnvme_create" 00:12:10.653 }, 00:12:10.653 { 00:12:10.653 "method": "bdev_wait_for_examine" 00:12:10.653 } 00:12:10.653 ] 00:12:10.653 } 00:12:10.653 ] 00:12:10.653 } 00:12:10.653 [2024-12-07 17:28:43.675684] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:12:10.653 [2024-12-07 17:28:43.675807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69053 ] 00:12:10.653 [2024-12-07 17:28:43.834893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.653 [2024-12-07 17:28:43.920545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.912 Running I/O for 5 seconds... 00:12:12.918 28172.00 IOPS, 110.05 MiB/s [2024-12-07T17:28:47.242Z] 29980.00 IOPS, 117.11 MiB/s [2024-12-07T17:28:48.182Z] 29760.00 IOPS, 116.25 MiB/s [2024-12-07T17:28:49.566Z] 30622.50 IOPS, 119.62 MiB/s [2024-12-07T17:28:49.566Z] 29656.00 IOPS, 115.84 MiB/s 00:12:16.184 Latency(us) 00:12:16.184 [2024-12-07T17:28:49.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:16.184 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:16.184 xnvme_bdev : 5.01 29632.37 115.75 0.00 0.00 2155.08 456.86 6956.90 00:12:16.184 [2024-12-07T17:28:49.566Z] =================================================================================================================== 00:12:16.184 [2024-12-07T17:28:49.566Z] Total : 29632.37 115.75 0.00 0.00 2155.08 456.86 6956.90 00:12:16.756 17:28:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:16.756 17:28:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:16.756 17:28:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:16.756 17:28:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:16.756 17:28:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:16.756 { 00:12:16.756 "subsystems": [ 00:12:16.756 { 00:12:16.756 "subsystem": "bdev", 00:12:16.756 "config": [ 00:12:16.756 { 00:12:16.756 "params": { 00:12:16.756 "io_mechanism": "libaio", 00:12:16.756 "conserve_cpu": false, 00:12:16.756 "filename": "/dev/nvme0n1", 00:12:16.756 "name": "xnvme_bdev" 00:12:16.756 }, 00:12:16.756 "method": "bdev_xnvme_create" 00:12:16.756 }, 00:12:16.756 { 00:12:16.756 "method": "bdev_wait_for_examine" 00:12:16.756 } 00:12:16.756 ] 00:12:16.756 } 00:12:16.756 ] 00:12:16.756 } 00:12:16.756 [2024-12-07 17:28:50.013394] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
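The JSON block printed above is what gen_conf feeds bdevperf on /dev/fd/62: a single bdev_xnvme_create definition plus bdev_wait_for_examine, so I/O only starts once the bdev exists. A standalone approximation of the same invocation, using process substitution in place of the script's fd plumbing (device path, names, and fio-style options copied from this run):

    # Equivalent shape of the randwrite bdevperf run above: 64-deep queue,
    # 4 KiB I/O, 5 seconds, against the one xnvme bdev defined inline.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(cat <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [
          { "params": { "io_mechanism": "libaio", "conserve_cpu": false,
                        "filename": "/dev/nvme0n1", "name": "xnvme_bdev" },
            "method": "bdev_xnvme_create" },
          { "method": "bdev_wait_for_examine" }
        ]
      }]
    }
    EOF
    ) -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096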
00:12:16.756 [2024-12-07 17:28:50.013536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69137 ] 00:12:17.018 [2024-12-07 17:28:50.178948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.018 [2024-12-07 17:28:50.297622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.280 Running I/O for 5 seconds... 00:12:19.608 34919.00 IOPS, 136.40 MiB/s [2024-12-07T17:28:53.935Z] 34112.50 IOPS, 133.25 MiB/s [2024-12-07T17:28:54.877Z] 34333.00 IOPS, 134.11 MiB/s [2024-12-07T17:28:55.818Z] 33066.25 IOPS, 129.17 MiB/s 00:12:22.436 Latency(us) 00:12:22.436 [2024-12-07T17:28:55.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.436 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:22.436 xnvme_bdev : 5.00 33472.92 130.75 0.00 0.00 1907.47 419.05 7057.72 00:12:22.436 [2024-12-07T17:28:55.818Z] =================================================================================================================== 00:12:22.436 [2024-12-07T17:28:55.818Z] Total : 33472.92 130.75 0.00 0.00 1907.47 419.05 7057.72 00:12:23.378 00:12:23.378 real 0m12.812s 00:12:23.378 user 0m4.781s 00:12:23.378 sys 0m6.438s 00:12:23.378 17:28:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.378 ************************************ 00:12:23.378 17:28:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:23.378 END TEST xnvme_bdevperf 00:12:23.378 ************************************ 00:12:23.378 17:28:56 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:23.378 17:28:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:23.378 17:28:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.378 17:28:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:23.378 ************************************ 00:12:23.378 START TEST xnvme_fio_plugin 00:12:23.378 ************************************ 00:12:23.378 17:28:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:23.378 17:28:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:23.378 17:28:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:23.378 17:28:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:23.378 17:28:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:23.378 17:28:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:23.378 17:28:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:23.378 17:28:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:23.378 17:28:56 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:23.378 17:28:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:23.378 17:28:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:23.378 17:28:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:23.378 17:28:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:23.378 17:28:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:23.378 17:28:56 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:23.378 17:28:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:23.378 17:28:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:23.378 17:28:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:23.378 17:28:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:23.378 17:28:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:23.378 17:28:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:23.378 17:28:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:23.378 17:28:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:23.378 17:28:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:23.378 { 00:12:23.378 "subsystems": [ 00:12:23.378 { 00:12:23.378 "subsystem": "bdev", 00:12:23.378 "config": [ 00:12:23.378 { 00:12:23.378 "params": { 00:12:23.378 "io_mechanism": "libaio", 00:12:23.378 "conserve_cpu": false, 00:12:23.378 "filename": "/dev/nvme0n1", 00:12:23.378 "name": "xnvme_bdev" 00:12:23.378 }, 00:12:23.378 "method": "bdev_xnvme_create" 00:12:23.378 }, 00:12:23.378 { 00:12:23.378 "method": "bdev_wait_for_examine" 00:12:23.378 } 00:12:23.378 ] 00:12:23.378 } 00:12:23.378 ] 00:12:23.378 } 00:12:23.378 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:23.378 fio-3.35 00:12:23.378 Starting 1 thread 00:12:29.971 00:12:29.971 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69251: Sat Dec 7 17:29:02 2024 00:12:29.971 read: IOPS=33.6k, BW=131MiB/s (138MB/s)(657MiB/5001msec) 00:12:29.971 slat (usec): min=4, max=1838, avg=19.15, stdev=92.84 00:12:29.972 clat (usec): min=106, max=13773, avg=1373.14, stdev=501.33 00:12:29.972 lat (usec): min=217, max=13778, avg=1392.29, stdev=492.16 00:12:29.972 clat percentiles (usec): 00:12:29.972 | 1.00th=[ 310], 5.00th=[ 611], 10.00th=[ 783], 20.00th=[ 971], 00:12:29.972 | 30.00th=[ 1123], 40.00th=[ 1237], 50.00th=[ 1352], 60.00th=[ 1467], 00:12:29.972 | 70.00th=[ 1582], 80.00th=[ 1729], 90.00th=[ 1942], 95.00th=[ 2180], 00:12:29.972 | 99.00th=[ 2933], 99.50th=[ 3228], 99.90th=[ 3851], 99.95th=[ 3982], 00:12:29.972 | 99.99th=[ 5342] 00:12:29.972 bw ( KiB/s): min=123712, max=140752, per=99.73%, avg=134106.67, stdev=5590.76, 
samples=9 00:12:29.972 iops : min=30928, max=35188, avg=33526.67, stdev=1397.69, samples=9 00:12:29.972 lat (usec) : 250=0.49%, 500=2.48%, 750=5.76%, 1000=13.06% 00:12:29.972 lat (msec) : 2=69.85%, 4=8.31%, 10=0.05%, 20=0.01% 00:12:29.972 cpu : usr=48.68%, sys=43.32%, ctx=15, majf=0, minf=764 00:12:29.972 IO depths : 1=0.6%, 2=1.5%, 4=3.4%, 8=8.3%, 16=22.3%, 32=61.7%, >=64=2.1% 00:12:29.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.972 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:29.972 issued rwts: total=168122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:29.972 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:29.972 00:12:29.972 Run status group 0 (all jobs): 00:12:29.972 READ: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=657MiB (689MB), run=5001-5001msec 00:12:30.232 ----------------------------------------------------- 00:12:30.232 Suppressions used: 00:12:30.232 count bytes template 00:12:30.232 1 11 /usr/src/fio/parse.c 00:12:30.232 1 8 libtcmalloc_minimal.so 00:12:30.232 1 904 libcrypto.so 00:12:30.232 ----------------------------------------------------- 00:12:30.232 00:12:30.232 17:29:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:30.232 17:29:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:30.232 17:29:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:30.232 17:29:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:30.232 17:29:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:30.232 17:29:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:30.232 17:29:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:30.232 17:29:03 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:30.232 17:29:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:30.232 17:29:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:30.232 17:29:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:30.232 17:29:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:30.232 17:29:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:30.232 17:29:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:30.232 17:29:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:30.232 17:29:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:30.232 17:29:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:30.232 17:29:03 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:30.232 17:29:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:30.232 17:29:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:30.233 17:29:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:30.233 { 00:12:30.233 "subsystems": [ 00:12:30.233 { 00:12:30.233 "subsystem": "bdev", 00:12:30.233 "config": [ 00:12:30.233 { 00:12:30.233 "params": { 00:12:30.233 "io_mechanism": "libaio", 00:12:30.233 "conserve_cpu": false, 00:12:30.233 "filename": "/dev/nvme0n1", 00:12:30.233 "name": "xnvme_bdev" 00:12:30.233 }, 00:12:30.233 "method": "bdev_xnvme_create" 00:12:30.233 }, 00:12:30.233 { 00:12:30.233 "method": "bdev_wait_for_examine" 00:12:30.233 } 00:12:30.233 ] 00:12:30.233 } 00:12:30.233 ] 00:12:30.233 } 00:12:30.233 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:30.233 fio-3.35 00:12:30.233 Starting 1 thread 00:12:36.817 00:12:36.817 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69349: Sat Dec 7 17:29:09 2024 00:12:36.817 write: IOPS=36.7k, BW=143MiB/s (150MB/s)(717MiB/5001msec); 0 zone resets 00:12:36.817 slat (usec): min=4, max=1973, avg=17.22, stdev=77.70 00:12:36.817 clat (usec): min=72, max=7751, avg=1262.99, stdev=488.83 00:12:36.817 lat (usec): min=88, max=7757, avg=1280.20, stdev=482.23 00:12:36.817 clat percentiles (usec): 00:12:36.817 | 1.00th=[ 297], 5.00th=[ 537], 10.00th=[ 693], 20.00th=[ 865], 00:12:36.817 | 30.00th=[ 1004], 40.00th=[ 1123], 50.00th=[ 1221], 60.00th=[ 1352], 00:12:36.817 | 70.00th=[ 1467], 80.00th=[ 1631], 90.00th=[ 1827], 95.00th=[ 2057], 00:12:36.817 | 99.00th=[ 2737], 99.50th=[ 3097], 99.90th=[ 3916], 99.95th=[ 4228], 00:12:36.817 | 99.99th=[ 5407] 00:12:36.817 bw ( KiB/s): min=140384, max=150248, per=99.37%, avg=145987.44, stdev=3317.60, samples=9 00:12:36.817 iops : min=35096, max=37562, avg=36496.78, stdev=829.46, samples=9 00:12:36.817 lat (usec) : 100=0.01%, 250=0.55%, 500=3.59%, 750=8.65%, 1000=17.11% 00:12:36.817 lat (msec) : 2=64.16%, 4=5.86%, 10=0.09% 00:12:36.817 cpu : usr=50.74%, sys=40.28%, ctx=18, majf=0, minf=765 00:12:36.817 IO depths : 1=0.7%, 2=1.5%, 4=3.5%, 8=8.8%, 16=22.8%, 32=60.8%, >=64=2.1% 00:12:36.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.817 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:36.817 issued rwts: total=0,183678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.817 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:36.817 00:12:36.817 Run status group 0 (all jobs): 00:12:36.817 WRITE: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=717MiB (752MB), run=5001-5001msec 00:12:37.080 ----------------------------------------------------- 00:12:37.080 Suppressions used: 00:12:37.080 count bytes template 00:12:37.080 1 11 /usr/src/fio/parse.c 00:12:37.080 1 8 libtcmalloc_minimal.so 00:12:37.080 1 904 libcrypto.so 00:12:37.080 ----------------------------------------------------- 00:12:37.080 00:12:37.080 ************************************ 00:12:37.080 END TEST xnvme_fio_plugin 00:12:37.080 ************************************ 
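Both libaio fio runs above are driven entirely by that small JSON blob: the harness renders it with gen_conf, hands it to fio over /dev/fd/62, and preloads the ASAN runtime it located via ldd ahead of the external ioengine. A minimal standalone sketch of the same randread invocation, assuming the paths shown in this log (SPDK tree under /home/vagrant/spdk_repo/spdk, fio under /usr/src/fio); only --rw changes for the randwrite pass:

# Sketch: re-run the fio external-ioengine test by hand (paths taken from this log).
PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
# Mirror the harness: find the ASAN runtime the plugin links against, preload both.
ASAN_LIB=$(ldd "$PLUGIN" | grep libasan | awk '{print $3}')
LD_PRELOAD="$ASAN_LIB $PLUGIN" /usr/src/fio/fio \
  --ioengine=spdk_bdev --thread=1 --direct=1 \
  --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
  --time_based --runtime=5 --name xnvme_bdev --filename=xnvme_bdev \
  --spdk_json_conf=<(cat <<'JSON'
{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"io_mechanism":"libaio","conserve_cpu":false,
             "filename":"/dev/nvme0n1","name":"xnvme_bdev"},
   "method":"bdev_xnvme_create"},
  {"method":"bdev_wait_for_examine"}]}]}
JSON
)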
00:12:37.080 00:12:37.080 real 0m13.824s 00:12:37.080 user 0m7.776s 00:12:37.080 sys 0m4.803s 00:12:37.080 17:29:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.080 17:29:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:37.080 17:29:10 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:37.080 17:29:10 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:37.080 17:29:10 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:37.080 17:29:10 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:37.080 17:29:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:37.080 17:29:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.080 17:29:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:37.080 ************************************ 00:12:37.080 START TEST xnvme_rpc 00:12:37.080 ************************************ 00:12:37.080 17:29:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:37.080 17:29:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:37.080 17:29:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:37.080 17:29:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:37.080 17:29:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:37.080 17:29:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69430 00:12:37.080 17:29:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69430 00:12:37.080 17:29:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69430 ']' 00:12:37.080 17:29:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.080 17:29:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:37.080 17:29:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.080 17:29:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:37.080 17:29:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.080 17:29:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:37.342 [2024-12-07 17:29:10.467455] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
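The xnvme_rpc test starting here exercises the same bdev through the JSON-RPC server instead of fio: spdk_tgt is launched, bdev_xnvme_create is issued with -c (the harness's shorthand for conserve_cpu), each stored parameter is read back with framework_get_config plus jq, and the bdev is deleted before the target is killed. A hand-run sketch of that lifecycle, assuming the rpc_cmd wrapper forwards its arguments to scripts/rpc.py unchanged:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Create the xnvme bdev exactly as the test does; -c requests conserve_cpu=true.
$RPC bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
# Read back one of the parameters the assertions check (name, filename,
# io_mechanism and conserve_cpu are all extracted the same way).
$RPC framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'  # expect: true
# Tear the bdev down again, as xnvme.sh@67 does.
$RPC bdev_xnvme_delete xnvme_bdev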
00:12:37.342 [2024-12-07 17:29:10.467604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69430 ] 00:12:37.342 [2024-12-07 17:29:10.629716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.604 [2024-12-07 17:29:10.752454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.178 xnvme_bdev 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:38.178 17:29:11 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.178 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.440 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.440 17:29:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:38.440 17:29:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:38.440 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.440 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.440 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.441 17:29:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69430 00:12:38.441 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69430 ']' 00:12:38.441 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69430 00:12:38.441 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:38.441 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:38.441 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69430 00:12:38.441 killing process with pid 69430 00:12:38.441 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:38.441 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:38.441 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69430' 00:12:38.441 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69430 00:12:38.441 17:29:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69430 00:12:39.830 00:12:39.830 real 0m2.760s 00:12:39.830 user 0m2.775s 00:12:39.830 sys 0m0.444s 00:12:39.830 ************************************ 00:12:39.830 END TEST xnvme_rpc 00:12:39.830 ************************************ 00:12:39.830 17:29:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.830 17:29:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.830 17:29:13 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:39.830 17:29:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:39.830 17:29:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.830 17:29:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:39.830 ************************************ 00:12:39.830 START TEST xnvme_bdevperf 00:12:39.830 ************************************ 00:12:39.830 17:29:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:39.830 17:29:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:39.830 17:29:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:39.830 17:29:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:39.830 17:29:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:39.830 17:29:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:12:39.830 17:29:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:39.830 17:29:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:40.091 { 00:12:40.091 "subsystems": [ 00:12:40.091 { 00:12:40.091 "subsystem": "bdev", 00:12:40.091 "config": [ 00:12:40.091 { 00:12:40.091 "params": { 00:12:40.091 "io_mechanism": "libaio", 00:12:40.091 "conserve_cpu": true, 00:12:40.091 "filename": "/dev/nvme0n1", 00:12:40.091 "name": "xnvme_bdev" 00:12:40.091 }, 00:12:40.091 "method": "bdev_xnvme_create" 00:12:40.091 }, 00:12:40.091 { 00:12:40.091 "method": "bdev_wait_for_examine" 00:12:40.091 } 00:12:40.091 ] 00:12:40.091 } 00:12:40.091 ] 00:12:40.091 } 00:12:40.091 [2024-12-07 17:29:13.262177] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:12:40.091 [2024-12-07 17:29:13.262285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69498 ] 00:12:40.091 [2024-12-07 17:29:13.419666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.353 [2024-12-07 17:29:13.513667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.614 Running I/O for 5 seconds... 00:12:42.501 36733.00 IOPS, 143.49 MiB/s [2024-12-07T17:29:16.826Z] 37648.50 IOPS, 147.06 MiB/s [2024-12-07T17:29:18.212Z] 35218.67 IOPS, 137.57 MiB/s [2024-12-07T17:29:18.793Z] 34337.00 IOPS, 134.13 MiB/s [2024-12-07T17:29:18.793Z] 34253.00 IOPS, 133.80 MiB/s 00:12:45.411 Latency(us) 00:12:45.411 [2024-12-07T17:29:18.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.411 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:45.411 xnvme_bdev : 5.00 34244.08 133.77 0.00 0.00 1864.59 340.28 6956.90 00:12:45.411 [2024-12-07T17:29:18.793Z] =================================================================================================================== 00:12:45.411 [2024-12-07T17:29:18.793Z] Total : 34244.08 133.77 0.00 0.00 1864.59 340.28 6956.90 00:12:46.398 17:29:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:46.398 17:29:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:46.398 17:29:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:46.398 17:29:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:46.398 17:29:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:46.398 { 00:12:46.398 "subsystems": [ 00:12:46.398 { 00:12:46.398 "subsystem": "bdev", 00:12:46.398 "config": [ 00:12:46.398 { 00:12:46.398 "params": { 00:12:46.398 "io_mechanism": "libaio", 00:12:46.398 "conserve_cpu": true, 00:12:46.398 "filename": "/dev/nvme0n1", 00:12:46.398 "name": "xnvme_bdev" 00:12:46.398 }, 00:12:46.398 "method": "bdev_xnvme_create" 00:12:46.398 }, 00:12:46.398 { 00:12:46.398 "method": "bdev_wait_for_examine" 00:12:46.398 } 00:12:46.398 ] 00:12:46.398 } 00:12:46.398 ] 00:12:46.398 } 00:12:46.398 [2024-12-07 17:29:19.668521] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
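Both bdevperf passes reuse that JSON (now with "conserve_cpu": true) over /dev/fd/62; only the -w flag differs between the randread run just finished and the randwrite run starting here. A sketch of the equivalent direct invocation, with paths assumed from this log:

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
# 64-deep 4 KiB random reads for 5 s against the xnvme_bdev defined in the JSON.
$BDEVPERF -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 --json <(cat <<'JSON'
{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"io_mechanism":"libaio","conserve_cpu":true,
             "filename":"/dev/nvme0n1","name":"xnvme_bdev"},
   "method":"bdev_xnvme_create"},
  {"method":"bdev_wait_for_examine"}]}]}
JSON
)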
00:12:46.399 [2024-12-07 17:29:19.668664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69579 ] 00:12:46.660 [2024-12-07 17:29:19.832295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.660 [2024-12-07 17:29:19.949869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.921 Running I/O for 5 seconds... 00:12:48.881 33827.00 IOPS, 132.14 MiB/s [2024-12-07T17:29:23.651Z] 33511.00 IOPS, 130.90 MiB/s [2024-12-07T17:29:24.597Z] 33589.00 IOPS, 131.21 MiB/s [2024-12-07T17:29:25.541Z] 33409.50 IOPS, 130.51 MiB/s [2024-12-07T17:29:25.541Z] 32884.40 IOPS, 128.45 MiB/s 00:12:52.159 Latency(us) 00:12:52.159 [2024-12-07T17:29:25.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:52.159 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:52.159 xnvme_bdev : 5.01 32846.89 128.31 0.00 0.00 1944.07 460.01 6654.42 00:12:52.159 [2024-12-07T17:29:25.541Z] =================================================================================================================== 00:12:52.160 [2024-12-07T17:29:25.542Z] Total : 32846.89 128.31 0.00 0.00 1944.07 460.01 6654.42 00:12:52.740 00:12:52.740 real 0m12.875s 00:12:52.740 user 0m4.978s 00:12:52.740 sys 0m6.205s 00:12:52.740 17:29:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:52.740 17:29:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:52.740 ************************************ 00:12:52.740 END TEST xnvme_bdevperf 00:12:52.740 ************************************ 00:12:53.001 17:29:26 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:53.001 17:29:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:53.001 17:29:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.001 17:29:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:53.001 ************************************ 00:12:53.001 START TEST xnvme_fio_plugin 00:12:53.001 ************************************ 00:12:53.001 17:29:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:53.001 17:29:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:53.001 17:29:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:53.001 17:29:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:53.001 17:29:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:53.001 17:29:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:53.001 17:29:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:53.001 17:29:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:12:53.001 17:29:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:53.001 17:29:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:53.001 17:29:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:53.001 17:29:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:53.001 17:29:26 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:53.001 17:29:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:53.001 17:29:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:53.001 17:29:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:53.001 17:29:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:53.001 17:29:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:53.001 17:29:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:53.001 17:29:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:53.001 17:29:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:53.001 17:29:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:53.002 17:29:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:53.002 17:29:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:53.002 { 00:12:53.002 "subsystems": [ 00:12:53.002 { 00:12:53.002 "subsystem": "bdev", 00:12:53.002 "config": [ 00:12:53.002 { 00:12:53.002 "params": { 00:12:53.002 "io_mechanism": "libaio", 00:12:53.002 "conserve_cpu": true, 00:12:53.002 "filename": "/dev/nvme0n1", 00:12:53.002 "name": "xnvme_bdev" 00:12:53.002 }, 00:12:53.002 "method": "bdev_xnvme_create" 00:12:53.002 }, 00:12:53.002 { 00:12:53.002 "method": "bdev_wait_for_examine" 00:12:53.002 } 00:12:53.002 ] 00:12:53.002 } 00:12:53.002 ] 00:12:53.002 } 00:12:53.002 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:53.002 fio-3.35 00:12:53.002 Starting 1 thread 00:12:59.588 00:12:59.588 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69693: Sat Dec 7 17:29:31 2024 00:12:59.588 read: IOPS=34.8k, BW=136MiB/s (143MB/s)(681MiB/5001msec) 00:12:59.588 slat (usec): min=4, max=2126, avg=19.87, stdev=86.20 00:12:59.588 clat (usec): min=105, max=4368, avg=1284.53, stdev=507.73 00:12:59.588 lat (usec): min=182, max=4825, avg=1304.40, stdev=499.59 00:12:59.588 clat percentiles (usec): 00:12:59.588 | 1.00th=[ 258], 5.00th=[ 506], 10.00th=[ 652], 20.00th=[ 857], 00:12:59.588 | 30.00th=[ 1004], 40.00th=[ 1139], 50.00th=[ 1270], 60.00th=[ 1401], 00:12:59.588 | 70.00th=[ 1532], 80.00th=[ 1680], 90.00th=[ 1876], 95.00th=[ 2114], 00:12:59.588 | 99.00th=[ 2802], 99.50th=[ 3130], 99.90th=[ 3589], 99.95th=[ 3785], 00:12:59.588 | 99.99th=[ 4080] 00:12:59.588 bw ( KiB/s): 
min=124168, max=148288, per=100.00%, avg=139599.89, stdev=7819.02, samples=9 00:12:59.588 iops : min=31042, max=37072, avg=34899.89, stdev=1954.84, samples=9 00:12:59.588 lat (usec) : 250=0.89%, 500=3.98%, 750=9.34%, 1000=15.29% 00:12:59.588 lat (msec) : 2=63.76%, 4=6.73%, 10=0.02% 00:12:59.588 cpu : usr=44.78%, sys=47.46%, ctx=9, majf=0, minf=764 00:12:59.588 IO depths : 1=0.6%, 2=1.3%, 4=3.3%, 8=8.6%, 16=23.0%, 32=61.2%, >=64=2.1% 00:12:59.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.588 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:59.588 issued rwts: total=174223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.588 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:59.588 00:12:59.588 Run status group 0 (all jobs): 00:12:59.588 READ: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=681MiB (714MB), run=5001-5001msec 00:12:59.850 ----------------------------------------------------- 00:12:59.850 Suppressions used: 00:12:59.850 count bytes template 00:12:59.850 1 11 /usr/src/fio/parse.c 00:12:59.850 1 8 libtcmalloc_minimal.so 00:12:59.850 1 904 libcrypto.so 00:12:59.850 ----------------------------------------------------- 00:12:59.850 00:12:59.850 17:29:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:59.850 17:29:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:59.850 17:29:33 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:59.850 17:29:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:59.850 17:29:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:59.850 17:29:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:59.850 17:29:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:59.850 17:29:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:59.850 17:29:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:59.850 17:29:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:59.850 17:29:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:59.850 17:29:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:59.850 17:29:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:59.850 17:29:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:59.850 17:29:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:59.850 17:29:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:59.850 17:29:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:59.850 
17:29:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:59.850 17:29:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:59.850 17:29:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:59.850 17:29:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:59.850 { 00:12:59.850 "subsystems": [ 00:12:59.850 { 00:12:59.850 "subsystem": "bdev", 00:12:59.850 "config": [ 00:12:59.850 { 00:12:59.850 "params": { 00:12:59.850 "io_mechanism": "libaio", 00:12:59.850 "conserve_cpu": true, 00:12:59.850 "filename": "/dev/nvme0n1", 00:12:59.850 "name": "xnvme_bdev" 00:12:59.850 }, 00:12:59.850 "method": "bdev_xnvme_create" 00:12:59.850 }, 00:12:59.850 { 00:12:59.850 "method": "bdev_wait_for_examine" 00:12:59.850 } 00:12:59.850 ] 00:12:59.850 } 00:12:59.850 ] 00:12:59.850 } 00:12:59.850 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:59.850 fio-3.35 00:12:59.850 Starting 1 thread 00:13:06.439 00:13:06.439 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69784: Sat Dec 7 17:29:38 2024 00:13:06.439 write: IOPS=34.2k, BW=134MiB/s (140MB/s)(668MiB/5001msec); 0 zone resets 00:13:06.439 slat (usec): min=4, max=1859, avg=23.50, stdev=84.30 00:13:06.439 clat (usec): min=90, max=4654, avg=1228.48, stdev=562.05 00:13:06.439 lat (usec): min=173, max=4755, avg=1251.98, stdev=556.46 00:13:06.439 clat percentiles (usec): 00:13:06.439 | 1.00th=[ 241], 5.00th=[ 408], 10.00th=[ 553], 20.00th=[ 734], 00:13:06.439 | 30.00th=[ 889], 40.00th=[ 1037], 50.00th=[ 1188], 60.00th=[ 1336], 00:13:06.439 | 70.00th=[ 1483], 80.00th=[ 1663], 90.00th=[ 1942], 95.00th=[ 2212], 00:13:06.439 | 99.00th=[ 2868], 99.50th=[ 3195], 99.90th=[ 3687], 99.95th=[ 3916], 00:13:06.439 | 99.99th=[ 4293] 00:13:06.439 bw ( KiB/s): min=125664, max=149720, per=100.00%, avg=137048.56, stdev=9323.57, samples=9 00:13:06.439 iops : min=31416, max=37430, avg=34262.11, stdev=2330.87, samples=9 00:13:06.439 lat (usec) : 100=0.01%, 250=1.14%, 500=6.67%, 750=12.95%, 1000=16.76% 00:13:06.439 lat (msec) : 2=54.11%, 4=8.33%, 10=0.03% 00:13:06.439 cpu : usr=33.40%, sys=57.34%, ctx=9, majf=0, minf=765 00:13:06.439 IO depths : 1=0.3%, 2=0.9%, 4=2.8%, 8=8.4%, 16=24.2%, 32=61.4%, >=64=2.0% 00:13:06.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.439 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:13:06.439 issued rwts: total=0,171113,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.439 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:06.439 00:13:06.439 Run status group 0 (all jobs): 00:13:06.439 WRITE: bw=134MiB/s (140MB/s), 134MiB/s-134MiB/s (140MB/s-140MB/s), io=668MiB (701MB), run=5001-5001msec 00:13:06.701 ----------------------------------------------------- 00:13:06.701 Suppressions used: 00:13:06.701 count bytes template 00:13:06.701 1 11 /usr/src/fio/parse.c 00:13:06.701 1 8 libtcmalloc_minimal.so 00:13:06.701 1 904 libcrypto.so 00:13:06.701 ----------------------------------------------------- 00:13:06.701 00:13:06.701 00:13:06.701 real 0m13.742s 00:13:06.701 user 0m6.626s 00:13:06.701 sys 0m5.863s 
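That sys 0m5.863s line closes out the libaio half of the matrix; the xnvme.sh@75-@84 trace lines that follow show the outer loops advancing to io_uring and resetting conserve_cpu to false. Reconstructed from those trace line numbers, the driver loop looks roughly like this sketch:

# Sketch of the xnvme.sh driver loop implied by the @75-@88 trace lines.
for io in "${xnvme_io[@]}"; do                       # libaio, io_uring, ...
  method_bdev_xnvme_create_0["io_mechanism"]=$io
  method_bdev_xnvme_create_0["filename"]=$filename   # /dev/nvme0n1 here
  for cc in "${xnvme_conserve_cpu[@]}"; do           # false, then true
    method_bdev_xnvme_create_0["conserve_cpu"]=$cc
    run_test xnvme_rpc xnvme_rpc
    run_test xnvme_bdevperf xnvme_bdevperf
    run_test xnvme_fio_plugin xnvme_fio_plugin
  done
done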
00:13:06.701 17:29:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.701 17:29:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:06.701 ************************************ 00:13:06.701 END TEST xnvme_fio_plugin 00:13:06.701 ************************************ 00:13:06.701 17:29:39 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:06.701 17:29:39 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:06.701 17:29:39 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:06.701 17:29:39 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:06.701 17:29:39 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:06.701 17:29:39 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:06.701 17:29:39 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:06.701 17:29:39 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:06.701 17:29:39 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:06.701 17:29:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:06.701 17:29:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.701 17:29:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:06.701 ************************************ 00:13:06.701 START TEST xnvme_rpc 00:13:06.701 ************************************ 00:13:06.701 17:29:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:06.701 17:29:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:06.701 17:29:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:06.701 17:29:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:06.701 17:29:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:06.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.701 17:29:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69865 00:13:06.701 17:29:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69865 00:13:06.701 17:29:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69865 ']' 00:13:06.701 17:29:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.701 17:29:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.701 17:29:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.701 17:29:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.701 17:29:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.701 17:29:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:06.701 [2024-12-07 17:29:40.034069] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
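waitforlisten gates everything here: no RPC is attempted until the freshly forked spdk_tgt (pid 69865) answers on /var/tmp/spdk.sock. A minimal equivalent poll, assuming rpc.py's -s socket flag and the generic rpc_get_methods call:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Poll until the target's RPC socket accepts requests, then proceed.
for _ in $(seq 1 100); do
  "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.1
done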
00:13:06.701 [2024-12-07 17:29:40.034210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69865 ] 00:13:06.962 [2024-12-07 17:29:40.199652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.962 [2024-12-07 17:29:40.319565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.906 17:29:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:07.906 17:29:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:07.906 17:29:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:07.906 17:29:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.906 17:29:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.906 xnvme_bdev 00:13:07.906 17:29:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.906 17:29:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:07.906 17:29:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:07.906 17:29:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:07.906 17:29:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.906 17:29:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.906 17:29:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.906 17:29:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:07.906 17:29:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:07.906 17:29:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:07.906 17:29:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.906 17:29:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.906 17:29:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:07.906 17:29:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.906 17:29:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:07.906 17:29:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:07.906 17:29:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:07.906 17:29:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.906 17:29:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.906 17:29:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:07.906 17:29:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.906 17:29:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:07.906 17:29:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:07.906 17:29:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:07.906 17:29:41 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.906 17:29:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.906 17:29:41 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:07.906 17:29:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.906 17:29:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:07.906 17:29:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:07.906 17:29:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.906 17:29:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.906 17:29:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.906 17:29:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69865 00:13:07.906 17:29:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69865 ']' 00:13:07.906 17:29:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69865 00:13:07.906 17:29:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:07.906 17:29:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.907 17:29:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69865 00:13:07.907 killing process with pid 69865 00:13:07.907 17:29:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:07.907 17:29:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:07.907 17:29:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69865' 00:13:07.907 17:29:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69865 00:13:07.907 17:29:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69865 00:13:09.291 00:13:09.292 real 0m2.672s 00:13:09.292 user 0m2.750s 00:13:09.292 sys 0m0.384s 00:13:09.292 17:29:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.292 17:29:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.292 ************************************ 00:13:09.292 END TEST xnvme_rpc 00:13:09.292 ************************************ 00:13:09.292 17:29:42 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:09.292 17:29:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:09.292 17:29:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.292 17:29:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:09.553 ************************************ 00:13:09.553 START TEST xnvme_bdevperf 00:13:09.553 ************************************ 00:13:09.553 17:29:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:09.553 17:29:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:09.553 17:29:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:09.553 17:29:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:09.553 17:29:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:09.553 17:29:42 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:09.553 17:29:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:09.553 17:29:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:09.553 { 00:13:09.553 "subsystems": [ 00:13:09.553 { 00:13:09.553 "subsystem": "bdev", 00:13:09.553 "config": [ 00:13:09.553 { 00:13:09.553 "params": { 00:13:09.553 "io_mechanism": "io_uring", 00:13:09.553 "conserve_cpu": false, 00:13:09.553 "filename": "/dev/nvme0n1", 00:13:09.553 "name": "xnvme_bdev" 00:13:09.553 }, 00:13:09.553 "method": "bdev_xnvme_create" 00:13:09.553 }, 00:13:09.553 { 00:13:09.553 "method": "bdev_wait_for_examine" 00:13:09.553 } 00:13:09.553 ] 00:13:09.553 } 00:13:09.553 ] 00:13:09.553 } 00:13:09.553 [2024-12-07 17:29:42.741530] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:13:09.553 [2024-12-07 17:29:42.741648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69939 ] 00:13:09.553 [2024-12-07 17:29:42.902890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.814 [2024-12-07 17:29:42.996707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.076 Running I/O for 5 seconds... 00:13:11.959 33844.00 IOPS, 132.20 MiB/s [2024-12-07T17:29:46.285Z] 33600.50 IOPS, 131.25 MiB/s [2024-12-07T17:29:47.670Z] 34012.33 IOPS, 132.86 MiB/s [2024-12-07T17:29:48.610Z] 34358.25 IOPS, 134.21 MiB/s 00:13:15.228 Latency(us) 00:13:15.228 [2024-12-07T17:29:48.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:15.228 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:15.228 xnvme_bdev : 5.00 34485.10 134.71 0.00 0.00 1852.28 403.30 14417.92 00:13:15.228 [2024-12-07T17:29:48.610Z] =================================================================================================================== 00:13:15.228 [2024-12-07T17:29:48.610Z] Total : 34485.10 134.71 0.00 0.00 1852.28 403.30 14417.92 00:13:15.800 17:29:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:15.800 17:29:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:15.800 17:29:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:15.800 17:29:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:15.800 17:29:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:15.800 { 00:13:15.800 "subsystems": [ 00:13:15.800 { 00:13:15.800 "subsystem": "bdev", 00:13:15.800 "config": [ 00:13:15.800 { 00:13:15.800 "params": { 00:13:15.800 "io_mechanism": "io_uring", 00:13:15.800 "conserve_cpu": false, 00:13:15.800 "filename": "/dev/nvme0n1", 00:13:15.800 "name": "xnvme_bdev" 00:13:15.800 }, 00:13:15.800 "method": "bdev_xnvme_create" 00:13:15.800 }, 00:13:15.800 { 00:13:15.800 "method": "bdev_wait_for_examine" 00:13:15.800 } 00:13:15.800 ] 00:13:15.800 } 00:13:15.800 ] 00:13:15.800 } 00:13:15.800 [2024-12-07 17:29:49.029054] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
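The bdevperf totals are internally consistent: MiB/s is just IOPS times the 4096-byte I/O size. Checking the io_uring randread Total row above:

# 34485.10 IOPS x 4096 B per I/O = 134.71 MiB/s, matching the table.
awk 'BEGIN { printf "%.2f MiB/s\n", 34485.10 * 4096 / (1024 * 1024) }'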
00:13:15.800 [2024-12-07 17:29:49.029167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70014 ] 00:13:16.059 [2024-12-07 17:29:49.188682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.060 [2024-12-07 17:29:49.285753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.321 Running I/O for 5 seconds... 00:13:18.201 36221.00 IOPS, 141.49 MiB/s [2024-12-07T17:29:52.966Z] 34958.50 IOPS, 136.56 MiB/s [2024-12-07T17:29:53.907Z] 34738.67 IOPS, 135.70 MiB/s [2024-12-07T17:29:54.935Z] 34573.00 IOPS, 135.05 MiB/s [2024-12-07T17:29:54.935Z] 34417.60 IOPS, 134.44 MiB/s 00:13:21.553 Latency(us) 00:13:21.553 [2024-12-07T17:29:54.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:21.553 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:21.553 xnvme_bdev : 5.00 34404.86 134.39 0.00 0.00 1856.41 201.65 8318.03 00:13:21.553 [2024-12-07T17:29:54.935Z] =================================================================================================================== 00:13:21.553 [2024-12-07T17:29:54.935Z] Total : 34404.86 134.39 0.00 0.00 1856.41 201.65 8318.03 00:13:22.191 00:13:22.191 real 0m12.651s 00:13:22.191 user 0m6.167s 00:13:22.191 sys 0m6.233s 00:13:22.191 ************************************ 00:13:22.191 END TEST xnvme_bdevperf 00:13:22.191 ************************************ 00:13:22.191 17:29:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.191 17:29:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:22.191 17:29:55 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:22.191 17:29:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:22.191 17:29:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.191 17:29:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:22.191 ************************************ 00:13:22.191 START TEST xnvme_fio_plugin 00:13:22.191 ************************************ 00:13:22.191 17:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:22.191 17:29:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:22.191 17:29:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:22.191 17:29:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:22.191 17:29:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:22.191 17:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:22.191 17:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:22.191 17:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:22.191 17:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:22.191 17:29:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:22.191 17:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:22.191 17:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:22.191 17:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:22.191 17:29:55 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:22.191 17:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:22.191 17:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:22.191 17:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:22.191 17:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:22.191 17:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:22.191 17:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:22.191 17:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:22.191 17:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:22.191 17:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:22.191 17:29:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:22.191 { 00:13:22.191 "subsystems": [ 00:13:22.191 { 00:13:22.191 "subsystem": "bdev", 00:13:22.191 "config": [ 00:13:22.191 { 00:13:22.191 "params": { 00:13:22.191 "io_mechanism": "io_uring", 00:13:22.191 "conserve_cpu": false, 00:13:22.191 "filename": "/dev/nvme0n1", 00:13:22.191 "name": "xnvme_bdev" 00:13:22.191 }, 00:13:22.191 "method": "bdev_xnvme_create" 00:13:22.191 }, 00:13:22.191 { 00:13:22.191 "method": "bdev_wait_for_examine" 00:13:22.191 } 00:13:22.191 ] 00:13:22.191 } 00:13:22.191 ] 00:13:22.191 } 00:13:22.452 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:22.452 fio-3.35 00:13:22.452 Starting 1 thread 00:13:29.038 00:13:29.038 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70128: Sat Dec 7 17:30:01 2024 00:13:29.038 read: IOPS=31.5k, BW=123MiB/s (129MB/s)(616MiB/5001msec) 00:13:29.038 slat (usec): min=2, max=589, avg= 3.44, stdev= 2.29 00:13:29.038 clat (usec): min=1043, max=4165, avg=1889.96, stdev=311.66 00:13:29.038 lat (usec): min=1046, max=4178, avg=1893.40, stdev=311.93 00:13:29.038 clat percentiles (usec): 00:13:29.038 | 1.00th=[ 1270], 5.00th=[ 1434], 10.00th=[ 1516], 20.00th=[ 1631], 00:13:29.038 | 30.00th=[ 1713], 40.00th=[ 1795], 50.00th=[ 1876], 60.00th=[ 1942], 00:13:29.038 | 70.00th=[ 2024], 80.00th=[ 2114], 90.00th=[ 2311], 95.00th=[ 2442], 00:13:29.038 | 99.00th=[ 2737], 99.50th=[ 2868], 99.90th=[ 3163], 99.95th=[ 3654], 00:13:29.038 | 99.99th=[ 4113] 00:13:29.038 bw ( KiB/s): 
min=121856, max=129536, per=100.00%, avg=126264.89, stdev=2665.90, samples=9 00:13:29.038 iops : min=30464, max=32384, avg=31566.22, stdev=666.47, samples=9 00:13:29.038 lat (msec) : 2=67.07%, 4=32.89%, 10=0.03% 00:13:29.038 cpu : usr=29.36%, sys=69.54%, ctx=16, majf=0, minf=762 00:13:29.038 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:29.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.038 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:29.038 issued rwts: total=157696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.038 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:29.038 00:13:29.038 Run status group 0 (all jobs): 00:13:29.038 READ: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=616MiB (646MB), run=5001-5001msec 00:13:29.038 ----------------------------------------------------- 00:13:29.038 Suppressions used: 00:13:29.038 count bytes template 00:13:29.038 1 11 /usr/src/fio/parse.c 00:13:29.038 1 8 libtcmalloc_minimal.so 00:13:29.038 1 904 libcrypto.so 00:13:29.038 ----------------------------------------------------- 00:13:29.038 00:13:29.038 17:30:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:29.038 17:30:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:29.038 17:30:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:29.038 17:30:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:29.038 17:30:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:29.038 17:30:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:29.038 17:30:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:29.038 17:30:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:29.038 17:30:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:29.038 17:30:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:29.038 17:30:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:29.038 17:30:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:29.038 17:30:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:29.038 17:30:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:29.038 17:30:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:29.038 17:30:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:29.038 17:30:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:29.038 17:30:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 
-- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:29.038 17:30:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:29.038 17:30:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:29.038 17:30:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:29.038 { 00:13:29.038 "subsystems": [ 00:13:29.038 { 00:13:29.038 "subsystem": "bdev", 00:13:29.038 "config": [ 00:13:29.038 { 00:13:29.038 "params": { 00:13:29.038 "io_mechanism": "io_uring", 00:13:29.038 "conserve_cpu": false, 00:13:29.038 "filename": "/dev/nvme0n1", 00:13:29.038 "name": "xnvme_bdev" 00:13:29.038 }, 00:13:29.038 "method": "bdev_xnvme_create" 00:13:29.038 }, 00:13:29.038 { 00:13:29.038 "method": "bdev_wait_for_examine" 00:13:29.038 } 00:13:29.038 ] 00:13:29.038 } 00:13:29.038 ] 00:13:29.038 } 00:13:29.300 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:29.300 fio-3.35 00:13:29.300 Starting 1 thread 00:13:35.884 00:13:35.884 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70221: Sat Dec 7 17:30:08 2024 00:13:35.884 write: IOPS=34.0k, BW=133MiB/s (139MB/s)(665MiB/5001msec); 0 zone resets 00:13:35.885 slat (nsec): min=2891, max=77911, avg=3470.59, stdev=1732.79 00:13:35.885 clat (usec): min=343, max=6903, avg=1741.35, stdev=306.77 00:13:35.885 lat (usec): min=346, max=6906, avg=1744.82, stdev=307.03 00:13:35.885 clat percentiles (usec): 00:13:35.885 | 1.00th=[ 1156], 5.00th=[ 1303], 10.00th=[ 1385], 20.00th=[ 1483], 00:13:35.885 | 30.00th=[ 1565], 40.00th=[ 1647], 50.00th=[ 1713], 60.00th=[ 1795], 00:13:35.885 | 70.00th=[ 1876], 80.00th=[ 1991], 90.00th=[ 2114], 95.00th=[ 2245], 00:13:35.885 | 99.00th=[ 2573], 99.50th=[ 2737], 99.90th=[ 3097], 99.95th=[ 3654], 00:13:35.885 | 99.99th=[ 5080] 00:13:35.885 bw ( KiB/s): min=123192, max=142104, per=100.00%, avg=136938.67, stdev=6610.44, samples=9 00:13:35.885 iops : min=30798, max=35526, avg=34234.67, stdev=1652.61, samples=9 00:13:35.885 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.10% 00:13:35.885 lat (msec) : 2=81.23%, 4=18.62%, 10=0.02% 00:13:35.885 cpu : usr=31.56%, sys=67.46%, ctx=11, majf=0, minf=763 00:13:35.885 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=25.0%, 32=50.2%, >=64=1.6% 00:13:35.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.885 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:35.885 issued rwts: total=0,170256,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.885 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:35.885 00:13:35.885 Run status group 0 (all jobs): 00:13:35.885 WRITE: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=665MiB (697MB), run=5001-5001msec 00:13:35.885 ----------------------------------------------------- 00:13:35.885 Suppressions used: 00:13:35.885 count bytes template 00:13:35.885 1 11 /usr/src/fio/parse.c 00:13:35.885 1 8 libtcmalloc_minimal.so 00:13:35.885 1 904 libcrypto.so 00:13:35.885 ----------------------------------------------------- 00:13:35.885 00:13:35.885 00:13:35.885 real 0m13.794s 00:13:35.885 user 0m5.975s 00:13:35.885 sys 0m7.388s 00:13:35.885 ************************************ 00:13:35.885 END TEST 
xnvme_fio_plugin 00:13:35.885 ************************************ 00:13:35.885 17:30:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.885 17:30:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:35.885 17:30:09 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:35.885 17:30:09 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:35.885 17:30:09 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:35.885 17:30:09 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:35.885 17:30:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:35.885 17:30:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.885 17:30:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:35.885 ************************************ 00:13:35.885 START TEST xnvme_rpc 00:13:35.885 ************************************ 00:13:35.885 17:30:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:35.885 17:30:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:35.885 17:30:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:35.885 17:30:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:35.885 17:30:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:35.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.885 17:30:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70307 00:13:35.885 17:30:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70307 00:13:35.885 17:30:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70307 ']' 00:13:35.885 17:30:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.885 17:30:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:35.885 17:30:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.885 17:30:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:35.885 17:30:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.885 17:30:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:36.146 [2024-12-07 17:30:09.346594] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
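The xnvme_rpc test that starts above exercises the xnvme bdev purely over JSON-RPC: spdk_tgt is launched, an xnvme bdev is created on /dev/nvme0n1 with io_mechanism=io_uring and conserve_cpu enabled (the -c flag from the cc map), each parameter is read back with framework_get_config plus a jq filter, and the bdev is deleted before the target is killed. A minimal standalone sketch of that round-trip, assuming the repo layout shown in this log and substituting SPDK's stock scripts/rpc.py for the harness's rpc_cmd wrapper (the sleep is a crude stand-in for the test's waitforlisten):

    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/build/bin/spdk_tgt &
    sleep 2                                    # stand-in for waitforlisten
    # args: filename, bdev name, io_mechanism; -c turns on conserve_cpu
    $SPDK/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
    # read one param back, as the test's rpc_xnvme helper does
    $SPDK/scripts/rpc.py framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # -> true
    $SPDK/scripts/rpc.py bdev_xnvme_delete xnvme_bdev
    kill %1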
00:13:36.146 [2024-12-07 17:30:09.346745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70307 ] 00:13:36.146 [2024-12-07 17:30:09.511274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.407 [2024-12-07 17:30:09.634631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.980 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:36.980 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:36.980 17:30:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:36.980 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.980 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.980 xnvme_bdev 00:13:36.980 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.980 17:30:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:36.980 17:30:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:36.980 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.980 17:30:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:36.980 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.981 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.981 17:30:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:36.981 17:30:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:36.981 17:30:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:36.981 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.981 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.981 17:30:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:36.981 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70307 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70307 ']' 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70307 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70307 00:13:37.243 killing process with pid 70307 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70307' 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70307 00:13:37.243 17:30:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70307 00:13:39.162 00:13:39.162 real 0m2.893s 00:13:39.162 user 0m2.934s 00:13:39.162 sys 0m0.446s 00:13:39.162 ************************************ 00:13:39.162 END TEST xnvme_rpc 00:13:39.162 ************************************ 00:13:39.162 17:30:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.162 17:30:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.162 17:30:12 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:39.162 17:30:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:39.162 17:30:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.162 17:30:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:39.162 ************************************ 00:13:39.162 START TEST xnvme_bdevperf 00:13:39.162 ************************************ 00:13:39.162 17:30:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:39.162 17:30:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:39.162 17:30:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:39.162 17:30:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:39.162 17:30:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:39.162 17:30:12 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:39.162 17:30:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:39.162 17:30:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:39.162 { 00:13:39.162 "subsystems": [ 00:13:39.162 { 00:13:39.162 "subsystem": "bdev", 00:13:39.162 "config": [ 00:13:39.162 { 00:13:39.162 "params": { 00:13:39.162 "io_mechanism": "io_uring", 00:13:39.162 "conserve_cpu": true, 00:13:39.162 "filename": "/dev/nvme0n1", 00:13:39.162 "name": "xnvme_bdev" 00:13:39.162 }, 00:13:39.162 "method": "bdev_xnvme_create" 00:13:39.162 }, 00:13:39.162 { 00:13:39.162 "method": "bdev_wait_for_examine" 00:13:39.162 } 00:13:39.162 ] 00:13:39.162 } 00:13:39.162 ] 00:13:39.162 } 00:13:39.162 [2024-12-07 17:30:12.290170] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:13:39.162 [2024-12-07 17:30:12.290281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70381 ] 00:13:39.162 [2024-12-07 17:30:12.450111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.424 [2024-12-07 17:30:12.544642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.424 Running I/O for 5 seconds... 00:13:41.755 38180.00 IOPS, 149.14 MiB/s [2024-12-07T17:30:16.077Z] 38050.00 IOPS, 148.63 MiB/s [2024-12-07T17:30:17.033Z] 38089.67 IOPS, 148.79 MiB/s [2024-12-07T17:30:17.977Z] 38188.00 IOPS, 149.17 MiB/s [2024-12-07T17:30:17.977Z] 38115.40 IOPS, 148.89 MiB/s 00:13:44.595 Latency(us) 00:13:44.595 [2024-12-07T17:30:17.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.595 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:44.595 xnvme_bdev : 5.00 38112.80 148.88 0.00 0.00 1675.73 661.66 9225.45 00:13:44.595 [2024-12-07T17:30:17.977Z] =================================================================================================================== 00:13:44.595 [2024-12-07T17:30:17.977Z] Total : 38112.80 148.88 0.00 0.00 1675.73 661.66 9225.45 00:13:45.168 17:30:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:45.168 17:30:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:45.168 17:30:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:45.168 17:30:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:45.168 17:30:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:45.168 { 00:13:45.168 "subsystems": [ 00:13:45.168 { 00:13:45.168 "subsystem": "bdev", 00:13:45.168 "config": [ 00:13:45.168 { 00:13:45.168 "params": { 00:13:45.168 "io_mechanism": "io_uring", 00:13:45.168 "conserve_cpu": true, 00:13:45.168 "filename": "/dev/nvme0n1", 00:13:45.168 "name": "xnvme_bdev" 00:13:45.168 }, 00:13:45.168 "method": "bdev_xnvme_create" 00:13:45.168 }, 00:13:45.168 { 00:13:45.168 "method": "bdev_wait_for_examine" 00:13:45.168 } 00:13:45.168 ] 00:13:45.168 } 00:13:45.168 ] 00:13:45.168 } 00:13:45.429 [2024-12-07 17:30:18.567939] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
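Each bdevperf pass in this section receives its bdev table as JSON on an inherited file descriptor (--json /dev/fd/62) rather than from a file on disk. One way to reproduce the randwrite pass configured just above outside the harness is bash process substitution, which supplies the same kind of /dev/fd path; the JSON is the config block printed in the log, only reflowed:

    CONF='{"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"io_mechanism":"io_uring","conserve_cpu":true,
                 "filename":"/dev/nvme0n1","name":"xnvme_bdev"},
       "method":"bdev_xnvme_create"},
      {"method":"bdev_wait_for_examine"}]}]}'
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(printf '%s' "$CONF") \
        -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096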
00:13:45.429 [2024-12-07 17:30:18.568181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70456 ] 00:13:45.429 [2024-12-07 17:30:18.727881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.690 [2024-12-07 17:30:18.821277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.690 Running I/O for 5 seconds... 00:13:48.017 37959.00 IOPS, 148.28 MiB/s [2024-12-07T17:30:22.346Z] 37199.50 IOPS, 145.31 MiB/s [2024-12-07T17:30:23.287Z] 36321.67 IOPS, 141.88 MiB/s [2024-12-07T17:30:24.272Z] 36871.50 IOPS, 144.03 MiB/s [2024-12-07T17:30:24.272Z] 37034.80 IOPS, 144.67 MiB/s 00:13:50.890 Latency(us) 00:13:50.890 [2024-12-07T17:30:24.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.890 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:50.890 xnvme_bdev : 5.00 37029.69 144.65 0.00 0.00 1724.59 661.66 5167.26 00:13:50.890 [2024-12-07T17:30:24.272Z] =================================================================================================================== 00:13:50.890 [2024-12-07T17:30:24.272Z] Total : 37029.69 144.65 0.00 0.00 1724.59 661.66 5167.26 00:13:51.828 00:13:51.828 real 0m12.654s 00:13:51.828 user 0m9.735s 00:13:51.828 sys 0m2.440s 00:13:51.828 ************************************ 00:13:51.828 END TEST xnvme_bdevperf 00:13:51.828 ************************************ 00:13:51.828 17:30:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.828 17:30:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:51.828 17:30:24 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:51.828 17:30:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:51.828 17:30:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.828 17:30:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:51.828 ************************************ 00:13:51.828 START TEST xnvme_fio_plugin 00:13:51.828 ************************************ 00:13:51.828 17:30:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:51.828 17:30:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:51.828 17:30:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:51.828 17:30:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:51.828 17:30:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:51.828 17:30:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:51.828 17:30:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:51.828 17:30:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:51.828 17:30:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:51.828 17:30:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:51.828 17:30:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:51.828 17:30:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:51.828 17:30:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:51.828 17:30:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:51.828 17:30:24 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:51.828 17:30:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:51.828 17:30:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:51.828 17:30:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:51.828 17:30:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:51.828 17:30:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:51.828 17:30:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:51.828 17:30:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:51.828 17:30:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:51.828 17:30:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:51.828 { 00:13:51.828 "subsystems": [ 00:13:51.828 { 00:13:51.828 "subsystem": "bdev", 00:13:51.828 "config": [ 00:13:51.828 { 00:13:51.828 "params": { 00:13:51.828 "io_mechanism": "io_uring", 00:13:51.828 "conserve_cpu": true, 00:13:51.828 "filename": "/dev/nvme0n1", 00:13:51.828 "name": "xnvme_bdev" 00:13:51.828 }, 00:13:51.828 "method": "bdev_xnvme_create" 00:13:51.828 }, 00:13:51.828 { 00:13:51.828 "method": "bdev_wait_for_examine" 00:13:51.828 } 00:13:51.828 ] 00:13:51.828 } 00:13:51.828 ] 00:13:51.828 } 00:13:51.828 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:51.828 fio-3.35 00:13:51.828 Starting 1 thread 00:13:58.406 00:13:58.406 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70571: Sat Dec 7 17:30:30 2024 00:13:58.406 read: IOPS=36.3k, BW=142MiB/s (149MB/s)(709MiB/5002msec) 00:13:58.406 slat (nsec): min=2854, max=71485, avg=3244.07, stdev=1405.73 00:13:58.406 clat (usec): min=882, max=3022, avg=1635.89, stdev=292.95 00:13:58.406 lat (usec): min=885, max=3045, avg=1639.13, stdev=293.19 00:13:58.406 clat percentiles (usec): 00:13:58.406 | 1.00th=[ 1074], 5.00th=[ 1188], 10.00th=[ 1270], 20.00th=[ 1369], 00:13:58.406 | 30.00th=[ 1467], 40.00th=[ 1549], 50.00th=[ 1631], 60.00th=[ 1696], 00:13:58.406 | 70.00th=[ 1778], 80.00th=[ 1876], 90.00th=[ 2024], 95.00th=[ 2147], 00:13:58.406 | 99.00th=[ 2409], 99.50th=[ 2507], 99.90th=[ 2737], 99.95th=[ 2802], 00:13:58.406 | 99.99th=[ 2933] 00:13:58.406 bw ( 
KiB/s): min=129536, max=158720, per=100.00%, avg=146773.33, stdev=10558.25, samples=9 00:13:58.406 iops : min=32384, max=39680, avg=36693.33, stdev=2639.56, samples=9 00:13:58.406 lat (usec) : 1000=0.19% 00:13:58.406 lat (msec) : 2=88.92%, 4=10.89% 00:13:58.406 cpu : usr=69.15%, sys=27.87%, ctx=10, majf=0, minf=762 00:13:58.406 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:58.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.406 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:58.406 issued rwts: total=181440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.406 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:58.406 00:13:58.406 Run status group 0 (all jobs): 00:13:58.407 READ: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=709MiB (743MB), run=5002-5002msec 00:13:58.407 ----------------------------------------------------- 00:13:58.407 Suppressions used: 00:13:58.407 count bytes template 00:13:58.407 1 11 /usr/src/fio/parse.c 00:13:58.407 1 8 libtcmalloc_minimal.so 00:13:58.407 1 904 libcrypto.so 00:13:58.407 ----------------------------------------------------- 00:13:58.407 00:13:58.407 17:30:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:58.407 17:30:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:58.407 17:30:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:58.407 17:30:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:58.407 17:30:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:58.407 17:30:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:58.407 17:30:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:58.407 17:30:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:58.407 17:30:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:58.407 17:30:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:58.407 17:30:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:58.407 17:30:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:58.407 17:30:31 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:58.407 17:30:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:58.407 17:30:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:58.407 17:30:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:58.668 17:30:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:58.668 17:30:31 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:58.668 17:30:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:58.668 17:30:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:58.668 17:30:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:58.668 { 00:13:58.668 "subsystems": [ 00:13:58.668 { 00:13:58.668 "subsystem": "bdev", 00:13:58.668 "config": [ 00:13:58.668 { 00:13:58.668 "params": { 00:13:58.668 "io_mechanism": "io_uring", 00:13:58.668 "conserve_cpu": true, 00:13:58.668 "filename": "/dev/nvme0n1", 00:13:58.668 "name": "xnvme_bdev" 00:13:58.668 }, 00:13:58.668 "method": "bdev_xnvme_create" 00:13:58.668 }, 00:13:58.668 { 00:13:58.668 "method": "bdev_wait_for_examine" 00:13:58.668 } 00:13:58.668 ] 00:13:58.668 } 00:13:58.668 ] 00:13:58.668 } 00:13:58.668 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:58.668 fio-3.35 00:13:58.668 Starting 1 thread 00:14:05.329 00:14:05.329 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70663: Sat Dec 7 17:30:37 2024 00:14:05.329 write: IOPS=34.6k, BW=135MiB/s (142MB/s)(676MiB/5002msec); 0 zone resets 00:14:05.329 slat (nsec): min=2898, max=76323, avg=3614.64, stdev=1660.71 00:14:05.329 clat (usec): min=744, max=5670, avg=1707.29, stdev=237.96 00:14:05.329 lat (usec): min=747, max=5673, avg=1710.90, stdev=238.23 00:14:05.329 clat percentiles (usec): 00:14:05.329 | 1.00th=[ 1254], 5.00th=[ 1369], 10.00th=[ 1434], 20.00th=[ 1516], 00:14:05.329 | 30.00th=[ 1582], 40.00th=[ 1631], 50.00th=[ 1680], 60.00th=[ 1745], 00:14:05.329 | 70.00th=[ 1811], 80.00th=[ 1876], 90.00th=[ 1991], 95.00th=[ 2114], 00:14:05.329 | 99.00th=[ 2409], 99.50th=[ 2540], 99.90th=[ 3064], 99.95th=[ 3326], 00:14:05.329 | 99.99th=[ 3982] 00:14:05.329 bw ( KiB/s): min=133632, max=142896, per=100.00%, avg=138516.44, stdev=2855.36, samples=9 00:14:05.329 iops : min=33408, max=35724, avg=34629.11, stdev=713.84, samples=9 00:14:05.329 lat (usec) : 750=0.01%, 1000=0.02% 00:14:05.329 lat (msec) : 2=90.38%, 4=9.59%, 10=0.01% 00:14:05.329 cpu : usr=70.45%, sys=26.31%, ctx=24, majf=0, minf=763 00:14:05.329 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:05.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.329 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:05.329 issued rwts: total=0,172983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.329 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:05.329 00:14:05.329 Run status group 0 (all jobs): 00:14:05.329 WRITE: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=676MiB (709MB), run=5002-5002msec 00:14:05.329 ----------------------------------------------------- 00:14:05.329 Suppressions used: 00:14:05.329 count bytes template 00:14:05.329 1 11 /usr/src/fio/parse.c 00:14:05.329 1 8 libtcmalloc_minimal.so 00:14:05.329 1 904 libcrypto.so 00:14:05.329 ----------------------------------------------------- 00:14:05.329 00:14:05.591 ************************************ 00:14:05.591 00:14:05.591 real 0m13.765s 00:14:05.592 user 0m9.829s 00:14:05.592 
sys 0m3.302s 00:14:05.592 17:30:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:05.592 17:30:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:05.592 END TEST xnvme_fio_plugin 00:14:05.592 ************************************ 00:14:05.592 17:30:38 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:05.592 17:30:38 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:05.592 17:30:38 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:05.592 17:30:38 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:05.592 17:30:38 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:05.592 17:30:38 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:05.592 17:30:38 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:05.592 17:30:38 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:05.592 17:30:38 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:05.592 17:30:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:05.592 17:30:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.592 17:30:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:05.592 ************************************ 00:14:05.592 START TEST xnvme_rpc 00:14:05.592 ************************************ 00:14:05.592 17:30:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:05.592 17:30:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:05.592 17:30:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:05.592 17:30:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:05.592 17:30:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:05.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.592 17:30:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70749 00:14:05.592 17:30:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70749 00:14:05.592 17:30:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70749 ']' 00:14:05.592 17:30:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.592 17:30:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:05.592 17:30:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.592 17:30:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:05.592 17:30:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.592 17:30:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:05.592 [2024-12-07 17:30:38.868811] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
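This second xnvme_rpc pass repeats the same round-trip with io_mechanism=io_uring_cmd, which targets the NVMe generic character device /dev/ng0n1 instead of the block device, and with conserve_cpu left false (cc["false"] expands to an empty string, hence the bare '' passed to bdev_xnvme_create below). A standalone equivalent simply drops the -c flag; reusing $SPDK from the earlier sketch:

    $SPDK/scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
    $SPDK/scripts/rpc.py framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # -> false
    $SPDK/scripts/rpc.py bdev_xnvme_delete xnvme_bdev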
00:14:05.592 [2024-12-07 17:30:38.868932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70749 ] 00:14:05.853 [2024-12-07 17:30:39.024611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.853 [2024-12-07 17:30:39.117679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.427 xnvme_bdev 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.427 17:30:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:06.689 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.689 17:30:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:06.689 17:30:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:06.689 17:30:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:06.689 17:30:39 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.689 17:30:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:06.689 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.689 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.689 17:30:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:06.689 17:30:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:06.689 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.689 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.689 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.689 17:30:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70749 00:14:06.689 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70749 ']' 00:14:06.689 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70749 00:14:06.689 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:06.689 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.689 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70749 00:14:06.689 killing process with pid 70749 00:14:06.689 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:06.689 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:06.689 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70749' 00:14:06.689 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70749 00:14:06.689 17:30:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70749 00:14:08.076 00:14:08.076 real 0m2.658s 00:14:08.076 user 0m2.733s 00:14:08.076 sys 0m0.389s 00:14:08.076 17:30:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.076 ************************************ 00:14:08.076 END TEST xnvme_rpc 00:14:08.076 ************************************ 00:14:08.076 17:30:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.339 17:30:41 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:08.339 17:30:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:08.339 17:30:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.339 17:30:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:08.339 ************************************ 00:14:08.339 START TEST xnvme_bdevperf 00:14:08.339 ************************************ 00:14:08.339 17:30:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:08.339 17:30:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:08.339 17:30:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:08.339 17:30:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:08.339 17:30:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:08.339 17:30:41 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:08.339 17:30:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:08.339 17:30:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:08.339 { 00:14:08.339 "subsystems": [ 00:14:08.339 { 00:14:08.339 "subsystem": "bdev", 00:14:08.339 "config": [ 00:14:08.339 { 00:14:08.339 "params": { 00:14:08.339 "io_mechanism": "io_uring_cmd", 00:14:08.339 "conserve_cpu": false, 00:14:08.339 "filename": "/dev/ng0n1", 00:14:08.339 "name": "xnvme_bdev" 00:14:08.339 }, 00:14:08.339 "method": "bdev_xnvme_create" 00:14:08.339 }, 00:14:08.339 { 00:14:08.339 "method": "bdev_wait_for_examine" 00:14:08.339 } 00:14:08.339 ] 00:14:08.339 } 00:14:08.339 ] 00:14:08.339 } 00:14:08.339 [2024-12-07 17:30:41.586643] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:14:08.339 [2024-12-07 17:30:41.586954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70812 ] 00:14:08.601 [2024-12-07 17:30:41.752212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.601 [2024-12-07 17:30:41.872669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.864 Running I/O for 5 seconds... 00:14:11.193 35328.00 IOPS, 138.00 MiB/s [2024-12-07T17:30:45.526Z] 36576.00 IOPS, 142.88 MiB/s [2024-12-07T17:30:46.466Z] 38336.00 IOPS, 149.75 MiB/s [2024-12-07T17:30:47.408Z] 38416.00 IOPS, 150.06 MiB/s [2024-12-07T17:30:47.408Z] 37670.40 IOPS, 147.15 MiB/s 00:14:14.026 Latency(us) 00:14:14.026 [2024-12-07T17:30:47.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.026 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:14.026 xnvme_bdev : 5.01 37644.64 147.05 0.00 0.00 1696.79 360.76 11090.71 00:14:14.026 [2024-12-07T17:30:47.408Z] =================================================================================================================== 00:14:14.026 [2024-12-07T17:30:47.408Z] Total : 37644.64 147.05 0.00 0.00 1696.79 360.76 11090.71 00:14:14.597 17:30:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:14.597 17:30:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:14.597 17:30:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:14.597 17:30:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:14.597 17:30:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:14.859 { 00:14:14.859 "subsystems": [ 00:14:14.859 { 00:14:14.859 "subsystem": "bdev", 00:14:14.859 "config": [ 00:14:14.859 { 00:14:14.859 "params": { 00:14:14.859 "io_mechanism": "io_uring_cmd", 00:14:14.859 "conserve_cpu": false, 00:14:14.859 "filename": "/dev/ng0n1", 00:14:14.859 "name": "xnvme_bdev" 00:14:14.859 }, 00:14:14.859 "method": "bdev_xnvme_create" 00:14:14.859 }, 00:14:14.859 { 00:14:14.859 "method": "bdev_wait_for_examine" 00:14:14.859 } 00:14:14.859 ] 00:14:14.859 } 00:14:14.859 ] 00:14:14.859 } 00:14:14.859 [2024-12-07 17:30:48.051021] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:14:14.859 [2024-12-07 17:30:48.051163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70892 ] 00:14:14.859 [2024-12-07 17:30:48.217098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.120 [2024-12-07 17:30:48.336644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.382 Running I/O for 5 seconds... 00:14:17.271 38589.00 IOPS, 150.74 MiB/s [2024-12-07T17:30:52.037Z] 38389.00 IOPS, 149.96 MiB/s [2024-12-07T17:30:52.980Z] 37261.33 IOPS, 145.55 MiB/s [2024-12-07T17:30:53.925Z] 36632.00 IOPS, 143.09 MiB/s 00:14:20.543 Latency(us) 00:14:20.543 [2024-12-07T17:30:53.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.543 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:20.543 xnvme_bdev : 5.00 36208.25 141.44 0.00 0.00 1763.91 354.46 10637.00 00:14:20.543 [2024-12-07T17:30:53.925Z] =================================================================================================================== 00:14:20.543 [2024-12-07T17:30:53.925Z] Total : 36208.25 141.44 0.00 0.00 1763.91 354.46 10637.00 00:14:21.116 17:30:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:21.116 17:30:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:21.116 17:30:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:21.116 17:30:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:21.116 17:30:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:21.116 { 00:14:21.116 "subsystems": [ 00:14:21.116 { 00:14:21.116 "subsystem": "bdev", 00:14:21.116 "config": [ 00:14:21.116 { 00:14:21.116 "params": { 00:14:21.116 "io_mechanism": "io_uring_cmd", 00:14:21.116 "conserve_cpu": false, 00:14:21.116 "filename": "/dev/ng0n1", 00:14:21.116 "name": "xnvme_bdev" 00:14:21.116 }, 00:14:21.116 "method": "bdev_xnvme_create" 00:14:21.116 }, 00:14:21.116 { 00:14:21.116 "method": "bdev_wait_for_examine" 00:14:21.116 } 00:14:21.116 ] 00:14:21.116 } 00:14:21.116 ] 00:14:21.116 } 00:14:21.116 [2024-12-07 17:30:54.489238] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:14:21.116 [2024-12-07 17:30:54.489642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70966 ] 00:14:21.378 [2024-12-07 17:30:54.659629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.640 [2024-12-07 17:30:54.774837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.900 Running I/O for 5 seconds... 
00:14:23.782 75968.00 IOPS, 296.75 MiB/s [2024-12-07T17:30:58.106Z] 77184.00 IOPS, 301.50 MiB/s [2024-12-07T17:30:59.488Z] 76416.00 IOPS, 298.50 MiB/s [2024-12-07T17:31:00.423Z] 76288.00 IOPS, 298.00 MiB/s [2024-12-07T17:31:00.423Z] 80281.60 IOPS, 313.60 MiB/s 00:14:27.041 Latency(us) 00:14:27.041 [2024-12-07T17:31:00.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.041 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:27.041 xnvme_bdev : 5.00 80257.08 313.50 0.00 0.00 794.02 482.07 2722.26 00:14:27.041 [2024-12-07T17:31:00.423Z] =================================================================================================================== 00:14:27.041 [2024-12-07T17:31:00.423Z] Total : 80257.08 313.50 0.00 0.00 794.02 482.07 2722.26 00:14:27.302 17:31:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:27.302 17:31:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:27.302 17:31:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:27.302 17:31:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:27.302 17:31:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:27.302 { 00:14:27.302 "subsystems": [ 00:14:27.302 { 00:14:27.302 "subsystem": "bdev", 00:14:27.302 "config": [ 00:14:27.302 { 00:14:27.302 "params": { 00:14:27.302 "io_mechanism": "io_uring_cmd", 00:14:27.302 "conserve_cpu": false, 00:14:27.302 "filename": "/dev/ng0n1", 00:14:27.302 "name": "xnvme_bdev" 00:14:27.302 }, 00:14:27.302 "method": "bdev_xnvme_create" 00:14:27.302 }, 00:14:27.302 { 00:14:27.302 "method": "bdev_wait_for_examine" 00:14:27.302 } 00:14:27.302 ] 00:14:27.302 } 00:14:27.302 ] 00:14:27.302 } 00:14:27.561 [2024-12-07 17:31:00.689905] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:14:27.561 [2024-12-07 17:31:00.690043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71040 ] 00:14:27.561 [2024-12-07 17:31:00.847952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.561 [2024-12-07 17:31:00.921040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.820 Running I/O for 5 seconds... 
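The unmap and write_zeroes workloads appear only in these io_uring_cmd bdevperf passes (they are part of io_pattern_ref for this mechanism). In the device-information tables, the MiB/s column follows directly from IOPS at the fixed 4096-byte IO size (-o 4096), so the figures can be sanity-checked by hand; for the unmap pass just reported:

    # IOPS * IO size in bytes / 2^20 bytes-per-MiB
    echo '80257.08 * 4096 / 1048576' | bc -l    # -> ~313.50 MiB/s, matching the table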
00:14:30.149 55108.00 IOPS, 215.27 MiB/s [2024-12-07T17:31:04.493Z] 50366.50 IOPS, 196.74 MiB/s [2024-12-07T17:31:05.448Z] 46053.67 IOPS, 179.90 MiB/s [2024-12-07T17:31:06.387Z] 43558.00 IOPS, 170.15 MiB/s [2024-12-07T17:31:06.387Z] 41992.00 IOPS, 164.03 MiB/s 00:14:33.005 Latency(us) 00:14:33.005 [2024-12-07T17:31:06.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.005 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:33.005 xnvme_bdev : 5.00 41972.01 163.95 0.00 0.00 1520.77 162.26 22080.59 00:14:33.005 [2024-12-07T17:31:06.387Z] =================================================================================================================== 00:14:33.005 [2024-12-07T17:31:06.387Z] Total : 41972.01 163.95 0.00 0.00 1520.77 162.26 22080.59 00:14:33.575 00:14:33.575 real 0m25.397s 00:14:33.575 user 0m14.162s 00:14:33.575 sys 0m10.745s 00:14:33.575 17:31:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:33.575 ************************************ 00:14:33.575 END TEST xnvme_bdevperf 00:14:33.575 ************************************ 00:14:33.575 17:31:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:33.836 17:31:06 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:33.836 17:31:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:33.836 17:31:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:33.836 17:31:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:33.836 ************************************ 00:14:33.836 START TEST xnvme_fio_plugin 00:14:33.836 ************************************ 00:14:33.836 17:31:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:33.836 17:31:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:33.837 17:31:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:33.837 17:31:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:33.837 17:31:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:33.837 17:31:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:33.837 17:31:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:33.837 17:31:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:33.837 17:31:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:33.837 17:31:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:33.837 17:31:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:33.837 17:31:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:33.837 17:31:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:14:33.837 17:31:06 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:33.837 17:31:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:33.837 17:31:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:33.837 17:31:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:33.837 17:31:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:33.837 17:31:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:33.837 17:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:33.837 17:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:33.837 17:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:33.837 17:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:33.837 17:31:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:33.837 { 00:14:33.837 "subsystems": [ 00:14:33.837 { 00:14:33.837 "subsystem": "bdev", 00:14:33.837 "config": [ 00:14:33.837 { 00:14:33.837 "params": { 00:14:33.837 "io_mechanism": "io_uring_cmd", 00:14:33.837 "conserve_cpu": false, 00:14:33.837 "filename": "/dev/ng0n1", 00:14:33.837 "name": "xnvme_bdev" 00:14:33.837 }, 00:14:33.837 "method": "bdev_xnvme_create" 00:14:33.837 }, 00:14:33.837 { 00:14:33.837 "method": "bdev_wait_for_examine" 00:14:33.837 } 00:14:33.837 ] 00:14:33.837 } 00:14:33.837 ] 00:14:33.837 } 00:14:33.837 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:33.837 fio-3.35 00:14:33.837 Starting 1 thread 00:14:40.423 00:14:40.423 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71153: Sat Dec 7 17:31:12 2024 00:14:40.423 read: IOPS=39.1k, BW=153MiB/s (160MB/s)(765MiB/5001msec) 00:14:40.423 slat (nsec): min=2864, max=64345, avg=3637.73, stdev=1959.98 00:14:40.423 clat (usec): min=861, max=3399, avg=1488.27, stdev=264.67 00:14:40.423 lat (usec): min=864, max=3442, avg=1491.91, stdev=265.17 00:14:40.423 clat percentiles (usec): 00:14:40.423 | 1.00th=[ 1037], 5.00th=[ 1123], 10.00th=[ 1188], 20.00th=[ 1270], 00:14:40.423 | 30.00th=[ 1336], 40.00th=[ 1385], 50.00th=[ 1450], 60.00th=[ 1516], 00:14:40.423 | 70.00th=[ 1598], 80.00th=[ 1696], 90.00th=[ 1844], 95.00th=[ 1975], 00:14:40.423 | 99.00th=[ 2245], 99.50th=[ 2376], 99.90th=[ 2769], 99.95th=[ 2868], 00:14:40.423 | 99.99th=[ 3195] 00:14:40.423 bw ( KiB/s): min=145920, max=172544, per=100.00%, avg=157866.67, stdev=10635.56, samples=9 00:14:40.423 iops : min=36480, max=43136, avg=39466.67, stdev=2658.89, samples=9 00:14:40.423 lat (usec) : 1000=0.48% 00:14:40.423 lat (msec) : 2=95.11%, 4=4.41% 00:14:40.423 cpu : usr=36.10%, sys=62.56%, ctx=16, majf=0, minf=762 00:14:40.423 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:40.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.423 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:14:40.423 issued rwts: total=195712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.423 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:40.423 00:14:40.423 Run status group 0 (all jobs): 00:14:40.423 READ: bw=153MiB/s (160MB/s), 153MiB/s-153MiB/s (160MB/s-160MB/s), io=765MiB (802MB), run=5001-5001msec 00:14:40.685 ----------------------------------------------------- 00:14:40.685 Suppressions used: 00:14:40.685 count bytes template 00:14:40.685 1 11 /usr/src/fio/parse.c 00:14:40.685 1 8 libtcmalloc_minimal.so 00:14:40.685 1 904 libcrypto.so 00:14:40.685 ----------------------------------------------------- 00:14:40.685 00:14:40.685 17:31:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:40.685 17:31:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:40.685 17:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:40.685 17:31:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:40.685 17:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:40.685 17:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:40.685 17:31:13 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:40.685 17:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:40.685 17:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:40.685 17:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:40.685 17:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:40.685 17:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:40.685 17:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:40.685 17:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:40.685 17:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:40.685 17:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:40.685 17:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:40.685 17:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:40.685 17:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:40.685 17:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:40.685 17:31:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:40.685 { 00:14:40.685 "subsystems": [ 00:14:40.685 { 00:14:40.685 "subsystem": "bdev", 00:14:40.685 "config": [ 00:14:40.685 { 00:14:40.685 "params": { 00:14:40.685 "io_mechanism": "io_uring_cmd", 00:14:40.685 "conserve_cpu": false, 00:14:40.685 "filename": "/dev/ng0n1", 00:14:40.685 "name": "xnvme_bdev" 00:14:40.685 }, 00:14:40.685 "method": "bdev_xnvme_create" 00:14:40.685 }, 00:14:40.685 { 00:14:40.685 "method": "bdev_wait_for_examine" 00:14:40.685 } 00:14:40.685 ] 00:14:40.685 } 00:14:40.685 ] 00:14:40.685 } 00:14:40.947 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:40.947 fio-3.35 00:14:40.947 Starting 1 thread 00:14:47.535 00:14:47.535 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71244: Sat Dec 7 17:31:19 2024 00:14:47.535 write: IOPS=41.3k, BW=161MiB/s (169MB/s)(807MiB/5002msec); 0 zone resets 00:14:47.535 slat (usec): min=2, max=123, avg= 4.02, stdev= 1.65 00:14:47.535 clat (usec): min=141, max=5799, avg=1403.13, stdev=302.96 00:14:47.535 lat (usec): min=155, max=5802, avg=1407.16, stdev=303.17 00:14:47.535 clat percentiles (usec): 00:14:47.535 | 1.00th=[ 766], 5.00th=[ 979], 10.00th=[ 1074], 20.00th=[ 1172], 00:14:47.535 | 30.00th=[ 1254], 40.00th=[ 1319], 50.00th=[ 1369], 60.00th=[ 1434], 00:14:47.535 | 70.00th=[ 1516], 80.00th=[ 1614], 90.00th=[ 1762], 95.00th=[ 1893], 00:14:47.535 | 99.00th=[ 2278], 99.50th=[ 2474], 99.90th=[ 3228], 99.95th=[ 3720], 00:14:47.535 | 99.99th=[ 4555] 00:14:47.535 bw ( KiB/s): min=153264, max=175856, per=99.63%, avg=164560.89, stdev=8607.31, samples=9 00:14:47.535 iops : min=38316, max=43964, avg=41140.22, stdev=2151.83, samples=9 00:14:47.535 lat (usec) : 250=0.01%, 500=0.30%, 750=0.56%, 1000=4.90% 00:14:47.535 lat (msec) : 2=91.10%, 4=3.11%, 10=0.03% 00:14:47.535 cpu : usr=38.95%, sys=59.89%, ctx=11, majf=0, minf=763 00:14:47.535 IO depths : 1=1.3%, 2=2.6%, 4=5.3%, 8=10.9%, 16=22.2%, 32=55.9%, >=64=1.8% 00:14:47.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:47.535 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.4%, >=64=0.0% 00:14:47.535 issued rwts: total=0,206555,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:47.535 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:47.535 00:14:47.535 Run status group 0 (all jobs): 00:14:47.535 WRITE: bw=161MiB/s (169MB/s), 161MiB/s-161MiB/s (169MB/s-169MB/s), io=807MiB (846MB), run=5002-5002msec 00:14:47.535 ----------------------------------------------------- 00:14:47.535 Suppressions used: 00:14:47.535 count bytes template 00:14:47.535 1 11 /usr/src/fio/parse.c 00:14:47.535 1 8 libtcmalloc_minimal.so 00:14:47.535 1 904 libcrypto.so 00:14:47.535 ----------------------------------------------------- 00:14:47.535 00:14:47.535 ************************************ 00:14:47.535 END TEST xnvme_fio_plugin 00:14:47.535 ************************************ 00:14:47.535 00:14:47.535 real 0m13.833s 00:14:47.535 user 0m6.589s 00:14:47.535 sys 0m6.790s 00:14:47.535 17:31:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:47.535 17:31:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:47.535 17:31:20 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:47.535 17:31:20 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:47.535 17:31:20 nvme_xnvme -- xnvme/xnvme.sh@84 -- # 
conserve_cpu=true 00:14:47.535 17:31:20 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:47.535 17:31:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:47.535 17:31:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:47.535 17:31:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.535 ************************************ 00:14:47.535 START TEST xnvme_rpc 00:14:47.535 ************************************ 00:14:47.535 17:31:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:47.535 17:31:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:47.535 17:31:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:47.535 17:31:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:47.535 17:31:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:47.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.535 17:31:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71329 00:14:47.535 17:31:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71329 00:14:47.535 17:31:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71329 ']' 00:14:47.535 17:31:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.535 17:31:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.535 17:31:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.535 17:31:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.535 17:31:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.535 17:31:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:47.797 [2024-12-07 17:31:20.992306] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
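For orientation, the xnvme_rpc test above reduces to a short RPC round-trip against spdk_tgt: create an xnvme bdev over /dev/ng0n1, read its parameters back with framework_get_config, then delete it and stop the target. A minimal standalone sketch (assuming the same repo layout and device, and SPDK's stock rpc.py client in place of the harness's rpc_cmd wrapper):

    ./build/bin/spdk_tgt &                                      # start the target
    ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params'
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
    kill %1                                                     # stop the target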
00:14:47.797 [2024-12-07 17:31:20.992470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71329 ] 00:14:47.797 [2024-12-07 17:31:21.161388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.059 [2024-12-07 17:31:21.281331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.632 17:31:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:48.632 17:31:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:48.632 17:31:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:48.632 17:31:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.632 17:31:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.632 xnvme_bdev 00:14:48.632 17:31:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.632 17:31:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:48.632 17:31:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:48.632 17:31:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.632 17:31:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:48.632 17:31:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71329 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71329 ']' 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71329 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71329 00:14:48.895 killing process with pid 71329 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71329' 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71329 00:14:48.895 17:31:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71329 00:14:50.814 ************************************ 00:14:50.814 END TEST xnvme_rpc 00:14:50.814 ************************************ 00:14:50.814 00:14:50.814 real 0m2.927s 00:14:50.814 user 0m2.893s 00:14:50.814 sys 0m0.511s 00:14:50.814 17:31:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.814 17:31:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.814 17:31:23 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:50.814 17:31:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:50.814 17:31:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.814 17:31:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:50.814 ************************************ 00:14:50.814 START TEST xnvme_bdevperf 00:14:50.814 ************************************ 00:14:50.814 17:31:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:50.815 17:31:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:50.815 17:31:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:50.815 17:31:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:50.815 17:31:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:50.815 17:31:23 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:50.815 17:31:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:50.815 17:31:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:50.815 { 00:14:50.815 "subsystems": [ 00:14:50.815 { 00:14:50.815 "subsystem": "bdev", 00:14:50.815 "config": [ 00:14:50.815 { 00:14:50.815 "params": { 00:14:50.815 "io_mechanism": "io_uring_cmd", 00:14:50.815 "conserve_cpu": true, 00:14:50.815 "filename": "/dev/ng0n1", 00:14:50.815 "name": "xnvme_bdev" 00:14:50.815 }, 00:14:50.815 "method": "bdev_xnvme_create" 00:14:50.815 }, 00:14:50.815 { 00:14:50.815 "method": "bdev_wait_for_examine" 00:14:50.815 } 00:14:50.815 ] 00:14:50.815 } 00:14:50.815 ] 00:14:50.815 } 00:14:50.815 [2024-12-07 17:31:23.960231] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:14:50.815 [2024-12-07 17:31:23.960535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71403 ] 00:14:50.815 [2024-12-07 17:31:24.124922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.076 [2024-12-07 17:31:24.242994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.337 Running I/O for 5 seconds... 00:14:53.222 40511.00 IOPS, 158.25 MiB/s [2024-12-07T17:31:27.546Z] 39454.00 IOPS, 154.12 MiB/s [2024-12-07T17:31:28.930Z] 38496.33 IOPS, 150.38 MiB/s [2024-12-07T17:31:29.871Z] 38136.25 IOPS, 148.97 MiB/s 00:14:56.489 Latency(us) 00:14:56.489 [2024-12-07T17:31:29.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.489 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:56.489 xnvme_bdev : 5.00 38011.45 148.48 0.00 0.00 1679.44 857.01 5116.85 00:14:56.489 [2024-12-07T17:31:29.871Z] =================================================================================================================== 00:14:56.489 [2024-12-07T17:31:29.871Z] Total : 38011.45 148.48 0.00 0.00 1679.44 857.01 5116.85 00:14:57.061 17:31:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:57.062 17:31:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:57.062 17:31:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:57.062 17:31:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:57.062 17:31:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:57.062 { 00:14:57.062 "subsystems": [ 00:14:57.062 { 00:14:57.062 "subsystem": "bdev", 00:14:57.062 "config": [ 00:14:57.062 { 00:14:57.062 "params": { 00:14:57.062 "io_mechanism": "io_uring_cmd", 00:14:57.062 "conserve_cpu": true, 00:14:57.062 "filename": "/dev/ng0n1", 00:14:57.062 "name": "xnvme_bdev" 00:14:57.062 }, 00:14:57.062 "method": "bdev_xnvme_create" 00:14:57.062 }, 00:14:57.062 { 00:14:57.062 "method": "bdev_wait_for_examine" 00:14:57.062 } 00:14:57.062 ] 00:14:57.062 } 00:14:57.062 ] 00:14:57.062 } 00:14:57.062 [2024-12-07 17:31:30.420757] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
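Each bdevperf pass in this test reuses one harness line, varying only the workload via -w (randread here, then randwrite, unmap, and write_zeroes below). A standalone equivalent, sketched on the assumption that conf.json holds the bdev_xnvme_create configuration printed above:

    ./build/examples/bdevperf --json conf.json \
        -q 64 -w randread -t 5 -T xnvme_bdev -o 4096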
00:14:57.062 [2024-12-07 17:31:30.420923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71476 ] 00:14:57.324 [2024-12-07 17:31:30.592006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.587 [2024-12-07 17:31:30.714602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.849 Running I/O for 5 seconds... 00:14:59.738 36596.00 IOPS, 142.95 MiB/s [2024-12-07T17:31:34.062Z] 38053.00 IOPS, 148.64 MiB/s [2024-12-07T17:31:35.450Z] 38933.67 IOPS, 152.08 MiB/s [2024-12-07T17:31:36.020Z] 40151.75 IOPS, 156.84 MiB/s 00:15:02.638 Latency(us) 00:15:02.638 [2024-12-07T17:31:36.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.638 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:02.638 xnvme_bdev : 5.00 39781.92 155.40 0.00 0.00 1604.03 686.87 8217.21 00:15:02.638 [2024-12-07T17:31:36.020Z] =================================================================================================================== 00:15:02.638 [2024-12-07T17:31:36.021Z] Total : 39781.92 155.40 0.00 0.00 1604.03 686.87 8217.21 00:15:03.580 17:31:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:03.580 17:31:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:03.580 17:31:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:03.580 17:31:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:03.580 17:31:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:03.580 { 00:15:03.580 "subsystems": [ 00:15:03.580 { 00:15:03.580 "subsystem": "bdev", 00:15:03.580 "config": [ 00:15:03.580 { 00:15:03.580 "params": { 00:15:03.580 "io_mechanism": "io_uring_cmd", 00:15:03.580 "conserve_cpu": true, 00:15:03.580 "filename": "/dev/ng0n1", 00:15:03.580 "name": "xnvme_bdev" 00:15:03.580 }, 00:15:03.580 "method": "bdev_xnvme_create" 00:15:03.580 }, 00:15:03.580 { 00:15:03.580 "method": "bdev_wait_for_examine" 00:15:03.580 } 00:15:03.580 ] 00:15:03.580 } 00:15:03.580 ] 00:15:03.580 } 00:15:03.580 [2024-12-07 17:31:36.892123] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:15:03.580 [2024-12-07 17:31:36.892274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71547 ] 00:15:03.842 [2024-12-07 17:31:37.053690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.842 [2024-12-07 17:31:37.176880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.103 Running I/O for 5 seconds... 
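Note the --json /dev/fd/62 argument in these invocations: the harness never writes a config file, it streams the gen_conf JSON to bdevperf over an anonymous descriptor. In bash this is plain process substitution, roughly:

    # sketch; gen_conf stands in for any command printing the JSON config above
    ./build/examples/bdevperf --json <(gen_conf) \
        -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096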
00:15:06.112 79104.00 IOPS, 309.00 MiB/s [2024-12-07T17:31:40.903Z] 79296.00 IOPS, 309.75 MiB/s [2024-12-07T17:31:41.474Z] 79317.33 IOPS, 309.83 MiB/s [2024-12-07T17:31:42.850Z] 79440.00 IOPS, 310.31 MiB/s [2024-12-07T17:31:42.850Z] 82918.40 IOPS, 323.90 MiB/s 00:15:09.468 Latency(us) 00:15:09.468 [2024-12-07T17:31:42.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.468 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:09.468 xnvme_bdev : 5.00 82875.39 323.73 0.00 0.00 768.84 397.00 3125.56 00:15:09.468 [2024-12-07T17:31:42.850Z] =================================================================================================================== 00:15:09.468 [2024-12-07T17:31:42.850Z] Total : 82875.39 323.73 0.00 0.00 768.84 397.00 3125.56 00:15:09.728 17:31:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:09.728 17:31:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:09.728 17:31:43 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:09.728 17:31:43 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:09.728 17:31:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:09.728 { 00:15:09.728 "subsystems": [ 00:15:09.728 { 00:15:09.728 "subsystem": "bdev", 00:15:09.728 "config": [ 00:15:09.728 { 00:15:09.728 "params": { 00:15:09.728 "io_mechanism": "io_uring_cmd", 00:15:09.728 "conserve_cpu": true, 00:15:09.728 "filename": "/dev/ng0n1", 00:15:09.728 "name": "xnvme_bdev" 00:15:09.728 }, 00:15:09.728 "method": "bdev_xnvme_create" 00:15:09.728 }, 00:15:09.728 { 00:15:09.728 "method": "bdev_wait_for_examine" 00:15:09.728 } 00:15:09.728 ] 00:15:09.728 } 00:15:09.728 ] 00:15:09.728 } 00:15:09.728 [2024-12-07 17:31:43.095406] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:15:09.728 [2024-12-07 17:31:43.095677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71621 ] 00:15:09.988 [2024-12-07 17:31:43.253657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.988 [2024-12-07 17:31:43.340005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.247 Running I/O for 5 seconds... 
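The MiB/s column in these tables follows directly from IOPS at the fixed 4096-byte I/O size: MiB/s = IOPS × 4096 / 2^20. For the unmap total above, 82875.39 × 4096 / 1048576 ≈ 323.73 MiB/s, matching the reported figure.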
00:15:12.565 50282.00 IOPS, 196.41 MiB/s [2024-12-07T17:31:46.885Z] 47450.00 IOPS, 185.35 MiB/s [2024-12-07T17:31:47.822Z] 44756.00 IOPS, 174.83 MiB/s [2024-12-07T17:31:48.760Z] 42897.00 IOPS, 167.57 MiB/s 00:15:15.378 Latency(us) 00:15:15.378 [2024-12-07T17:31:48.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.378 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:15.378 xnvme_bdev : 5.00 41972.53 163.96 0.00 0.00 1520.06 77.59 22483.89 00:15:15.378 [2024-12-07T17:31:48.760Z] =================================================================================================================== 00:15:15.378 [2024-12-07T17:31:48.760Z] Total : 41972.53 163.96 0.00 0.00 1520.06 77.59 22483.89 00:15:16.319 00:15:16.319 real 0m25.445s 00:15:16.319 user 0m15.921s 00:15:16.319 sys 0m7.227s 00:15:16.319 17:31:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:16.319 17:31:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:16.319 ************************************ 00:15:16.319 END TEST xnvme_bdevperf 00:15:16.319 ************************************ 00:15:16.319 17:31:49 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:16.319 17:31:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:16.319 17:31:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:16.319 17:31:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:16.319 ************************************ 00:15:16.319 START TEST xnvme_fio_plugin 00:15:16.319 ************************************ 00:15:16.319 17:31:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:16.319 17:31:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:16.319 17:31:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:16.319 17:31:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:16.319 17:31:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:16.319 17:31:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:16.319 17:31:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:16.319 17:31:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:16.319 17:31:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:16.319 17:31:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:16.319 17:31:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:16.319 17:31:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:16.319 17:31:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:16.319 17:31:49 
nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:16.319 17:31:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:16.319 17:31:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:16.319 17:31:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:16.319 17:31:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:16.319 17:31:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:16.319 17:31:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:16.319 17:31:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:16.319 17:31:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:16.319 17:31:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:16.319 17:31:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:16.319 { 00:15:16.319 "subsystems": [ 00:15:16.319 { 00:15:16.319 "subsystem": "bdev", 00:15:16.319 "config": [ 00:15:16.319 { 00:15:16.319 "params": { 00:15:16.319 "io_mechanism": "io_uring_cmd", 00:15:16.319 "conserve_cpu": true, 00:15:16.319 "filename": "/dev/ng0n1", 00:15:16.319 "name": "xnvme_bdev" 00:15:16.319 }, 00:15:16.319 "method": "bdev_xnvme_create" 00:15:16.319 }, 00:15:16.319 { 00:15:16.319 "method": "bdev_wait_for_examine" 00:15:16.319 } 00:15:16.319 ] 00:15:16.319 } 00:15:16.319 ] 00:15:16.319 } 00:15:16.319 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:16.319 fio-3.35 00:15:16.319 Starting 1 thread 00:15:22.921 00:15:22.921 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71734: Sat Dec 7 17:31:55 2024 00:15:22.921 read: IOPS=37.9k, BW=148MiB/s (155MB/s)(740MiB/5001msec) 00:15:22.921 slat (usec): min=2, max=254, avg= 3.79, stdev= 2.57 00:15:22.921 clat (usec): min=898, max=6265, avg=1535.67, stdev=259.52 00:15:22.921 lat (usec): min=901, max=6268, avg=1539.46, stdev=259.92 00:15:22.921 clat percentiles (usec): 00:15:22.921 | 1.00th=[ 1057], 5.00th=[ 1156], 10.00th=[ 1237], 20.00th=[ 1319], 00:15:22.921 | 30.00th=[ 1385], 40.00th=[ 1450], 50.00th=[ 1500], 60.00th=[ 1565], 00:15:22.921 | 70.00th=[ 1631], 80.00th=[ 1729], 90.00th=[ 1860], 95.00th=[ 1991], 00:15:22.921 | 99.00th=[ 2278], 99.50th=[ 2409], 99.90th=[ 2835], 99.95th=[ 2999], 00:15:22.921 | 99.99th=[ 3785] 00:15:22.921 bw ( KiB/s): min=138752, max=166912, per=99.98%, avg=151521.78, stdev=8971.82, samples=9 00:15:22.921 iops : min=34688, max=41728, avg=37880.44, stdev=2242.96, samples=9 00:15:22.921 lat (usec) : 1000=0.17% 00:15:22.921 lat (msec) : 2=94.90%, 4=4.93%, 10=0.01% 00:15:22.921 cpu : usr=47.48%, sys=48.60%, ctx=73, majf=0, minf=762 00:15:22.921 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:22.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.921 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:22.921 issued rwts: 
total=189470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.921 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:22.921 00:15:22.921 Run status group 0 (all jobs): 00:15:22.921 READ: bw=148MiB/s (155MB/s), 148MiB/s-148MiB/s (155MB/s-155MB/s), io=740MiB (776MB), run=5001-5001msec 00:15:22.921 ----------------------------------------------------- 00:15:22.921 Suppressions used: 00:15:22.921 count bytes template 00:15:22.921 1 11 /usr/src/fio/parse.c 00:15:22.921 1 8 libtcmalloc_minimal.so 00:15:22.921 1 904 libcrypto.so 00:15:22.921 ----------------------------------------------------- 00:15:22.921 00:15:22.921 17:31:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:22.921 17:31:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:22.921 17:31:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:22.921 17:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:22.921 17:31:56 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:22.921 17:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:22.921 17:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:22.921 17:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:22.921 17:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:22.921 17:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:22.921 17:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:22.921 17:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:22.921 17:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:22.921 17:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:22.921 17:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:22.921 17:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:23.181 17:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:23.182 17:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:23.182 17:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:23.182 17:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:23.182 17:31:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based 
--runtime=5 --thread=1 --name xnvme_bdev 00:15:23.182 { 00:15:23.182 "subsystems": [ 00:15:23.182 { 00:15:23.182 "subsystem": "bdev", 00:15:23.182 "config": [ 00:15:23.182 { 00:15:23.182 "params": { 00:15:23.182 "io_mechanism": "io_uring_cmd", 00:15:23.182 "conserve_cpu": true, 00:15:23.182 "filename": "/dev/ng0n1", 00:15:23.182 "name": "xnvme_bdev" 00:15:23.182 }, 00:15:23.182 "method": "bdev_xnvme_create" 00:15:23.182 }, 00:15:23.182 { 00:15:23.182 "method": "bdev_wait_for_examine" 00:15:23.182 } 00:15:23.182 ] 00:15:23.182 } 00:15:23.182 ] 00:15:23.182 } 00:15:23.182 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:23.182 fio-3.35 00:15:23.182 Starting 1 thread 00:15:29.766 00:15:29.766 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71830: Sat Dec 7 17:32:02 2024 00:15:29.766 write: IOPS=39.0k, BW=152MiB/s (160MB/s)(762MiB/5002msec); 0 zone resets 00:15:29.766 slat (usec): min=2, max=222, avg= 4.18, stdev= 2.36 00:15:29.767 clat (usec): min=796, max=5325, avg=1475.86, stdev=265.62 00:15:29.767 lat (usec): min=812, max=5356, avg=1480.04, stdev=266.27 00:15:29.767 clat percentiles (usec): 00:15:29.767 | 1.00th=[ 1045], 5.00th=[ 1139], 10.00th=[ 1188], 20.00th=[ 1270], 00:15:29.767 | 30.00th=[ 1336], 40.00th=[ 1385], 50.00th=[ 1434], 60.00th=[ 1500], 00:15:29.767 | 70.00th=[ 1565], 80.00th=[ 1647], 90.00th=[ 1795], 95.00th=[ 1942], 00:15:29.767 | 99.00th=[ 2278], 99.50th=[ 2409], 99.90th=[ 3458], 99.95th=[ 4015], 00:15:29.767 | 99.99th=[ 4555] 00:15:29.767 bw ( KiB/s): min=149616, max=177848, per=100.00%, avg=156561.78, stdev=8700.79, samples=9 00:15:29.767 iops : min=37404, max=44462, avg=39140.44, stdev=2175.20, samples=9 00:15:29.767 lat (usec) : 1000=0.35% 00:15:29.767 lat (msec) : 2=95.86%, 4=3.74%, 10=0.05% 00:15:29.767 cpu : usr=50.69%, sys=44.31%, ctx=18, majf=0, minf=763 00:15:29.767 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.4%, 16=25.0%, 32=50.3%, >=64=1.6% 00:15:29.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.767 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:29.767 issued rwts: total=0,194981,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.767 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:29.767 00:15:29.767 Run status group 0 (all jobs): 00:15:29.767 WRITE: bw=152MiB/s (160MB/s), 152MiB/s-152MiB/s (160MB/s-160MB/s), io=762MiB (799MB), run=5002-5002msec 00:15:30.029 ----------------------------------------------------- 00:15:30.029 Suppressions used: 00:15:30.029 count bytes template 00:15:30.029 1 11 /usr/src/fio/parse.c 00:15:30.029 1 8 libtcmalloc_minimal.so 00:15:30.029 1 904 libcrypto.so 00:15:30.029 ----------------------------------------------------- 00:15:30.029 00:15:30.029 00:15:30.029 real 0m13.867s 00:15:30.029 user 0m7.806s 00:15:30.029 sys 0m5.288s 00:15:30.029 ************************************ 00:15:30.029 END TEST xnvme_fio_plugin 00:15:30.029 ************************************ 00:15:30.029 17:32:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:30.029 17:32:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:30.029 17:32:03 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71329 00:15:30.029 17:32:03 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71329 ']' 00:15:30.029 17:32:03 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 71329 00:15:30.029 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: 
kill: (71329) - No such process 00:15:30.029 Process with pid 71329 is not found 00:15:30.029 17:32:03 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71329 is not found' 00:15:30.029 00:15:30.029 real 3m29.560s 00:15:30.029 user 1m58.037s 00:15:30.029 sys 1m16.826s 00:15:30.029 17:32:03 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:30.029 ************************************ 00:15:30.029 END TEST nvme_xnvme 00:15:30.029 ************************************ 00:15:30.029 17:32:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:30.029 17:32:03 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:30.029 17:32:03 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:30.029 17:32:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:30.029 17:32:03 -- common/autotest_common.sh@10 -- # set +x 00:15:30.029 ************************************ 00:15:30.029 START TEST blockdev_xnvme 00:15:30.029 ************************************ 00:15:30.029 17:32:03 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:30.291 * Looking for test storage... 00:15:30.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:30.291 17:32:03 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:30.291 17:32:03 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:30.291 17:32:03 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:30.291 17:32:03 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:30.291 17:32:03 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:30.291 17:32:03 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:30.291 17:32:03 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:30.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.291 --rc genhtml_branch_coverage=1 00:15:30.291 --rc genhtml_function_coverage=1 00:15:30.291 --rc genhtml_legend=1 00:15:30.291 --rc geninfo_all_blocks=1 00:15:30.291 --rc geninfo_unexecuted_blocks=1 00:15:30.291 00:15:30.291 ' 00:15:30.291 17:32:03 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:30.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.291 --rc genhtml_branch_coverage=1 00:15:30.291 --rc genhtml_function_coverage=1 00:15:30.291 --rc genhtml_legend=1 00:15:30.291 --rc geninfo_all_blocks=1 00:15:30.291 --rc geninfo_unexecuted_blocks=1 00:15:30.291 00:15:30.291 ' 00:15:30.291 17:32:03 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:30.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.291 --rc genhtml_branch_coverage=1 00:15:30.291 --rc genhtml_function_coverage=1 00:15:30.291 --rc genhtml_legend=1 00:15:30.291 --rc geninfo_all_blocks=1 00:15:30.291 --rc geninfo_unexecuted_blocks=1 00:15:30.291 00:15:30.291 ' 00:15:30.291 17:32:03 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:30.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:30.291 --rc genhtml_branch_coverage=1 00:15:30.291 --rc genhtml_function_coverage=1 00:15:30.291 --rc genhtml_legend=1 00:15:30.291 --rc geninfo_all_blocks=1 00:15:30.291 --rc geninfo_unexecuted_blocks=1 00:15:30.291 00:15:30.291 ' 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71959 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71959 00:15:30.291 17:32:03 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 71959 ']' 00:15:30.291 17:32:03 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:30.291 17:32:03 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.291 17:32:03 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:30.291 17:32:03 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.291 17:32:03 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:30.291 17:32:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:30.291 [2024-12-07 17:32:03.656338] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
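The stage that follows scans the host for NVMe namespaces and builds one bdev_xnvme_create command per non-zoned device, as the xtrace below shows. Condensed into a sketch (io_uring and -c match the io_mechanism and conserve_cpu flag used in this run):

    nvmes=()
    for nvme in /dev/nvme*n*; do
        [[ -b $nvme ]] || continue                         # block devices only
        zoned=$(cat /sys/block/${nvme##*/}/queue/zoned 2>/dev/null)
        [[ -n $zoned && $zoned != none ]] && continue      # skip zoned namespaces
        nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} io_uring -c")
    done
    printf '%s\n' "${nvmes[@]}"    # these lines are then replayed through rpc_cmd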
00:15:30.291 [2024-12-07 17:32:03.656479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71959 ] 00:15:30.553 [2024-12-07 17:32:03.818683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.815 [2024-12-07 17:32:03.970541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.388 17:32:04 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:31.388 17:32:04 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:31.388 17:32:04 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:31.388 17:32:04 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:31.388 17:32:04 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:31.388 17:32:04 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:31.388 17:32:04 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:31.960 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:32.529 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:32.529 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:32.529 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:32.529 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:32.529 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:32.529 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:32.529 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:32.529 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:15:32.529 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:15:32.529 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:15:32.529 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:32.529 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:15:32.529 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:32.529 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:15:32.529 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:32.529 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:15:32.530 17:32:05 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:32.530 nvme0n1 00:15:32.530 nvme0n2 00:15:32.530 nvme0n3 00:15:32.530 nvme1n1 00:15:32.530 nvme2n1 00:15:32.530 nvme3n1 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.530 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.530 17:32:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:32.791 17:32:05 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.791 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:32.791 17:32:05 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.791 17:32:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:32.791 
17:32:05 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.791 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:32.791 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:32.791 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:32.791 17:32:05 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.791 17:32:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:32.791 17:32:05 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.791 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:32.791 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:32.791 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "67e15ecd-41a1-479a-9823-8a6bfc890ecc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "67e15ecd-41a1-479a-9823-8a6bfc890ecc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "c401c15b-98ad-4815-8a49-0799e4da3e91"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c401c15b-98ad-4815-8a49-0799e4da3e91",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "bf019948-1bb0-4169-af61-271d8edb50e5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bf019948-1bb0-4169-af61-271d8edb50e5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "5afb8f90-090f-41de-ad09-ece5e56d86de"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5afb8f90-090f-41de-ad09-ece5e56d86de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "0dada14e-7e25-4265-83da-3956894e4b5d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0dada14e-7e25-4265-83da-3956894e4b5d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "b8ccc584-065c-4e29-986b-d66e34c1a51e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b8ccc584-065c-4e29-986b-d66e34c1a51e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:32.791 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:32.791 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:32.791 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:32.791 17:32:05 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 71959 00:15:32.791 17:32:05 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71959 ']' 00:15:32.791 17:32:05 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 71959 00:15:32.791 17:32:06 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:32.791 17:32:06 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.791 17:32:06 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 71959 00:15:32.791 killing process with pid 71959 00:15:32.791 17:32:06 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:32.791 17:32:06 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:32.791 17:32:06 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71959' 00:15:32.791 17:32:06 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 71959 00:15:32.791 17:32:06 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 71959 00:15:34.176 17:32:07 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:34.176 17:32:07 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:34.176 17:32:07 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:34.176 17:32:07 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.176 17:32:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:34.176 ************************************ 00:15:34.176 START TEST bdev_hello_world 00:15:34.176 ************************************ 00:15:34.176 17:32:07 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:34.176 [2024-12-07 17:32:07.347271] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:15:34.176 [2024-12-07 17:32:07.347367] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72243 ] 00:15:34.176 [2024-12-07 17:32:07.495834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.437 [2024-12-07 17:32:07.586911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.699 [2024-12-07 17:32:07.915905] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:34.699 [2024-12-07 17:32:07.915951] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:34.699 [2024-12-07 17:32:07.915967] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:34.699 [2024-12-07 17:32:07.917954] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:34.699 [2024-12-07 17:32:07.918494] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:34.699 [2024-12-07 17:32:07.918513] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:34.699 [2024-12-07 17:32:07.919128] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
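The run_test bdev_hello_world stage above launches the stock SPDK example binary directly against the JSON config dumped earlier. Reproducing it by hand is a one-liner (paths as in this log; -b selects which bdev the example opens):

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b nvme0n1

On success the app prints the write_complete/read_complete notices seen above, ending with 'Read string from bdev : Hello World!' before stopping itself.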
00:15:34.699 00:15:34.699 [2024-12-07 17:32:07.919169] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:35.273 00:15:35.273 real 0m1.222s 00:15:35.273 user 0m0.885s 00:15:35.273 sys 0m0.203s 00:15:35.273 ************************************ 00:15:35.273 END TEST bdev_hello_world 00:15:35.273 ************************************ 00:15:35.273 17:32:08 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:35.273 17:32:08 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:35.273 17:32:08 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:35.273 17:32:08 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:35.273 17:32:08 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:35.273 17:32:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:35.273 ************************************ 00:15:35.273 START TEST bdev_bounds 00:15:35.273 ************************************ 00:15:35.273 17:32:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:35.273 Process bdevio pid: 72274 00:15:35.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.273 17:32:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72274 00:15:35.273 17:32:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:35.273 17:32:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72274' 00:15:35.273 17:32:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:35.273 17:32:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72274 00:15:35.273 17:32:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72274 ']' 00:15:35.273 17:32:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.273 17:32:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.273 17:32:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.273 17:32:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.273 17:32:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:35.273 [2024-12-07 17:32:08.642203] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
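Unlike hello_bdev, bdev_bounds starts the bdevio app as a long-running RPC server and blocks until its socket is up before driving any tests. A condensed sketch of the launch sequence traced above, assuming waitforlisten is the autotest_common.sh helper that polls until the app listens on /var/tmp/spdk.sock:

    bdevio=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
    "$bdevio" -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!                 # 72274 in this run
    trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$bdevio_pid"   # blocks until /var/tmp/spdk.sock is listening

The -w flag keeps bdevio idle, waiting for an RPC to start the tests instead of running them immediately at boot.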
00:15:35.273 [2024-12-07 17:32:08.642489] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72274 ] 00:15:35.535 [2024-12-07 17:32:08.798689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:35.535 [2024-12-07 17:32:08.890847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.535 [2024-12-07 17:32:08.891357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.535 [2024-12-07 17:32:08.891358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.476 17:32:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.476 17:32:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:36.476 17:32:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:36.476 I/O targets: 00:15:36.476 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:36.476 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:36.476 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:36.476 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:36.476 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:36.476 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:36.476 00:15:36.476 00:15:36.476 CUnit - A unit testing framework for C - Version 2.1-3 00:15:36.476 http://cunit.sourceforge.net/ 00:15:36.476 00:15:36.476 00:15:36.476 Suite: bdevio tests on: nvme3n1 00:15:36.476 Test: blockdev write read block ...passed 00:15:36.476 Test: blockdev write zeroes read block ...passed 00:15:36.476 Test: blockdev write zeroes read no split ...passed 00:15:36.476 Test: blockdev write zeroes read split ...passed 00:15:36.476 Test: blockdev write zeroes read split partial ...passed 00:15:36.476 Test: blockdev reset ...passed 00:15:36.476 Test: blockdev write read 8 blocks ...passed 00:15:36.476 Test: blockdev write read size > 128k ...passed 00:15:36.476 Test: blockdev write read invalid size ...passed 00:15:36.476 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:36.476 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:36.476 Test: blockdev write read max offset ...passed 00:15:36.476 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:36.476 Test: blockdev writev readv 8 blocks ...passed 00:15:36.476 Test: blockdev writev readv 30 x 1block ...passed 00:15:36.476 Test: blockdev writev readv block ...passed 00:15:36.476 Test: blockdev writev readv size > 128k ...passed 00:15:36.476 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:36.476 Test: blockdev comparev and writev ...passed 00:15:36.476 Test: blockdev nvme passthru rw ...passed 00:15:36.476 Test: blockdev nvme passthru vendor specific ...passed 00:15:36.476 Test: blockdev nvme admin passthru ...passed 00:15:36.476 Test: blockdev copy ...passed 00:15:36.476 Suite: bdevio tests on: nvme2n1 00:15:36.476 Test: blockdev write read block ...passed 00:15:36.476 Test: blockdev write zeroes read block ...passed 00:15:36.476 Test: blockdev write zeroes read no split ...passed 00:15:36.476 Test: blockdev write zeroes read split ...passed 00:15:36.476 Test: blockdev write zeroes read split partial ...passed 00:15:36.476 Test: blockdev reset ...passed 
00:15:36.476 Test: blockdev write read 8 blocks ...passed 00:15:36.477 Test: blockdev write read size > 128k ...passed 00:15:36.477 Test: blockdev write read invalid size ...passed 00:15:36.477 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:36.477 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:36.477 Test: blockdev write read max offset ...passed 00:15:36.477 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:36.477 Test: blockdev writev readv 8 blocks ...passed 00:15:36.477 Test: blockdev writev readv 30 x 1block ...passed 00:15:36.477 Test: blockdev writev readv block ...passed 00:15:36.477 Test: blockdev writev readv size > 128k ...passed 00:15:36.477 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:36.477 Test: blockdev comparev and writev ...passed 00:15:36.477 Test: blockdev nvme passthru rw ...passed 00:15:36.477 Test: blockdev nvme passthru vendor specific ...passed 00:15:36.477 Test: blockdev nvme admin passthru ...passed 00:15:36.477 Test: blockdev copy ...passed 00:15:36.477 Suite: bdevio tests on: nvme1n1 00:15:36.477 Test: blockdev write read block ...passed 00:15:36.477 Test: blockdev write zeroes read block ...passed 00:15:36.477 Test: blockdev write zeroes read no split ...passed 00:15:36.477 Test: blockdev write zeroes read split ...passed 00:15:36.477 Test: blockdev write zeroes read split partial ...passed 00:15:36.477 Test: blockdev reset ...passed 00:15:36.477 Test: blockdev write read 8 blocks ...passed 00:15:36.477 Test: blockdev write read size > 128k ...passed 00:15:36.477 Test: blockdev write read invalid size ...passed 00:15:36.477 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:36.477 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:36.477 Test: blockdev write read max offset ...passed 00:15:36.477 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:36.477 Test: blockdev writev readv 8 blocks ...passed 00:15:36.477 Test: blockdev writev readv 30 x 1block ...passed 00:15:36.477 Test: blockdev writev readv block ...passed 00:15:36.477 Test: blockdev writev readv size > 128k ...passed 00:15:36.477 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:36.477 Test: blockdev comparev and writev ...passed 00:15:36.477 Test: blockdev nvme passthru rw ...passed 00:15:36.477 Test: blockdev nvme passthru vendor specific ...passed 00:15:36.477 Test: blockdev nvme admin passthru ...passed 00:15:36.477 Test: blockdev copy ...passed 00:15:36.477 Suite: bdevio tests on: nvme0n3 00:15:36.477 Test: blockdev write read block ...passed 00:15:36.477 Test: blockdev write zeroes read block ...passed 00:15:36.477 Test: blockdev write zeroes read no split ...passed 00:15:36.477 Test: blockdev write zeroes read split ...passed 00:15:36.738 Test: blockdev write zeroes read split partial ...passed 00:15:36.738 Test: blockdev reset ...passed 00:15:36.738 Test: blockdev write read 8 blocks ...passed 00:15:36.738 Test: blockdev write read size > 128k ...passed 00:15:36.738 Test: blockdev write read invalid size ...passed 00:15:36.738 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:36.738 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:36.738 Test: blockdev write read max offset ...passed 00:15:36.738 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:36.738 Test: blockdev writev readv 8 blocks 
...passed 00:15:36.738 Test: blockdev writev readv 30 x 1block ...passed 00:15:36.738 Test: blockdev writev readv block ...passed 00:15:36.738 Test: blockdev writev readv size > 128k ...passed 00:15:36.738 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:36.738 Test: blockdev comparev and writev ...passed 00:15:36.738 Test: blockdev nvme passthru rw ...passed 00:15:36.738 Test: blockdev nvme passthru vendor specific ...passed 00:15:36.738 Test: blockdev nvme admin passthru ...passed 00:15:36.738 Test: blockdev copy ...passed 00:15:36.738 Suite: bdevio tests on: nvme0n2 00:15:36.738 Test: blockdev write read block ...passed 00:15:36.738 Test: blockdev write zeroes read block ...passed 00:15:36.738 Test: blockdev write zeroes read no split ...passed 00:15:36.738 Test: blockdev write zeroes read split ...passed 00:15:36.738 Test: blockdev write zeroes read split partial ...passed 00:15:36.738 Test: blockdev reset ...passed 00:15:36.738 Test: blockdev write read 8 blocks ...passed 00:15:36.738 Test: blockdev write read size > 128k ...passed 00:15:36.738 Test: blockdev write read invalid size ...passed 00:15:36.738 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:36.738 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:36.738 Test: blockdev write read max offset ...passed 00:15:36.738 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:36.738 Test: blockdev writev readv 8 blocks ...passed 00:15:36.738 Test: blockdev writev readv 30 x 1block ...passed 00:15:36.738 Test: blockdev writev readv block ...passed 00:15:36.738 Test: blockdev writev readv size > 128k ...passed 00:15:36.739 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:36.739 Test: blockdev comparev and writev ...passed 00:15:36.739 Test: blockdev nvme passthru rw ...passed 00:15:36.739 Test: blockdev nvme passthru vendor specific ...passed 00:15:36.739 Test: blockdev nvme admin passthru ...passed 00:15:36.739 Test: blockdev copy ...passed 00:15:36.739 Suite: bdevio tests on: nvme0n1 00:15:36.739 Test: blockdev write read block ...passed 00:15:36.739 Test: blockdev write zeroes read block ...passed 00:15:36.739 Test: blockdev write zeroes read no split ...passed 00:15:36.739 Test: blockdev write zeroes read split ...passed 00:15:36.739 Test: blockdev write zeroes read split partial ...passed 00:15:36.739 Test: blockdev reset ...passed 00:15:36.739 Test: blockdev write read 8 blocks ...passed 00:15:36.739 Test: blockdev write read size > 128k ...passed 00:15:36.739 Test: blockdev write read invalid size ...passed 00:15:36.739 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:36.739 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:36.739 Test: blockdev write read max offset ...passed 00:15:36.739 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:36.739 Test: blockdev writev readv 8 blocks ...passed 00:15:36.739 Test: blockdev writev readv 30 x 1block ...passed 00:15:36.739 Test: blockdev writev readv block ...passed 00:15:36.739 Test: blockdev writev readv size > 128k ...passed 00:15:36.739 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:36.739 Test: blockdev comparev and writev ...passed 00:15:36.739 Test: blockdev nvme passthru rw ...passed 00:15:36.739 Test: blockdev nvme passthru vendor specific ...passed 00:15:36.739 Test: blockdev nvme admin passthru ...passed 00:15:36.739 Test: blockdev copy ...passed 
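All six suites above are driven over RPC rather than from the command line: the harness invokes tests.py, which connects to the waiting bdevio app and calls its perform_tests method, and the per-test results stream back into this log. The exact call from the trace:

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests

Every test reports passed even though the xnvme bdevs advertise only read, write, and write_zeroes support; operations a bdev does not advertise (unmap, flush, NVMe passthru, copy - all false in the bdev_get_bdevs dump earlier) pass trivially rather than failing.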
00:15:36.739 00:15:36.739 Run Summary: Type Total Ran Passed Failed Inactive 00:15:36.739 suites 6 6 n/a 0 0 00:15:36.739 tests 138 138 138 0 0 00:15:36.739 asserts 780 780 780 0 n/a 00:15:36.739 00:15:36.739 Elapsed time = 1.244 seconds 00:15:36.739 0 00:15:36.739 17:32:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72274 00:15:36.739 17:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72274 ']' 00:15:36.739 17:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72274 00:15:36.739 17:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:36.739 17:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.739 17:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72274 00:15:36.739 killing process with pid 72274 00:15:36.739 17:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:36.739 17:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:36.739 17:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72274' 00:15:36.739 17:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72274 00:15:36.739 17:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72274 00:15:37.680 17:32:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:37.680 00:15:37.680 real 0m2.389s 00:15:37.680 user 0m5.917s 00:15:37.680 sys 0m0.311s 00:15:37.680 17:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:37.680 ************************************ 00:15:37.680 END TEST bdev_bounds 00:15:37.680 ************************************ 00:15:37.680 17:32:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:37.680 17:32:11 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:37.680 17:32:11 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:37.680 17:32:11 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.680 17:32:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.680 ************************************ 00:15:37.680 START TEST bdev_nbd 00:15:37.680 ************************************ 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
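Both teardown paths so far (pid 71959 for the setup app, 72274 for bdevio) go through the same killprocess helper: verify the pid is still alive, sanity-check the process name, then kill and reap it. A condensed sketch of that pattern, leaving out the sudo special-casing the real autotest_common.sh helper carries:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                     # still running?
        if [[ $(uname) == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")    # reactor_0 here; the real
        fi                                             # helper branches on this
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                    # reap and surface exit code
    }

The final wait matters: it both reaps the child and propagates a non-zero exit from the app under test into the harness.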
00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:37.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72335 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72335 /var/tmp/spdk-nbd.sock 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72335 ']' 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:37.680 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:37.941 [2024-12-07 17:32:11.114894] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
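The NBD stage gets its own app and its own RPC socket so it cannot collide with the default /var/tmp/spdk.sock: a bdev_svc instance is started with -r /var/tmp/spdk-nbd.sock, and every subsequent rpc.py call in this test passes -s with that path. Only the first six of the sixteen candidate /dev/nbd* nodes are used, one per bdev. A minimal sketch of the bring-up, mirroring the trace:

    sock=/var/tmp/spdk-nbd.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r "$sock" -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    nbd_pid=$!                       # 72335 in this run
    waitforlisten "$nbd_pid" "$sock"
    # Export one bdev per NBD node. With no /dev/nbdX argument, nbd_start_disk
    # picks a free device and prints it (inferred from the trace below, where
    # nvme0n1 came back as /dev/nbd0).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" nbd_start_disk nvme0n1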
00:15:37.941 [2024-12-07 17:32:11.115206] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.941 [2024-12-07 17:32:11.277635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.202 [2024-12-07 17:32:11.411724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.775 17:32:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:38.775 17:32:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:38.775 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:38.775 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:38.775 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:38.775 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:38.775 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:38.775 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:38.775 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:38.775 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:38.775 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:38.775 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:38.775 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:38.775 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:38.775 17:32:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:39.035 17:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:39.035 17:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:39.035 17:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:39.035 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:39.035 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:39.035 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:39.035 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:39.035 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:39.035 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:39.035 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:39.035 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:39.035 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:39.035 
1+0 records in 00:15:39.035 1+0 records out 00:15:39.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000951316 s, 4.3 MB/s 00:15:39.035 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.035 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:39.035 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.035 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:39.035 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:39.035 17:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:39.035 17:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:39.035 17:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:39.302 17:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:39.302 17:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:39.302 17:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:39.302 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:39.302 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:39.302 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:39.302 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:39.302 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:39.302 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:39.302 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:39.302 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:39.302 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:39.302 1+0 records in 00:15:39.302 1+0 records out 00:15:39.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00153629 s, 2.7 MB/s 00:15:39.302 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.302 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:39.302 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.302 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:39.302 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:39.302 17:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:39.302 17:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:39.302 17:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:39.562 17:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:39.562 17:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:39.562 17:32:12 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:39.562 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:39.562 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:39.562 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:39.562 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:39.562 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:39.562 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:39.562 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:39.562 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:39.562 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:39.562 1+0 records in 00:15:39.562 1+0 records out 00:15:39.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00124397 s, 3.3 MB/s 00:15:39.562 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.562 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:39.562 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.562 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:39.562 17:32:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:39.562 17:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:39.562 17:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:39.562 17:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:39.822 17:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:39.822 17:32:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:39.822 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:39.822 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:39.822 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:39.822 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:39.822 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:39.823 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:39.823 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:39.823 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:39.823 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:39.823 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:39.823 1+0 records in 00:15:39.823 1+0 records out 00:15:39.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123431 s, 3.3 MB/s 00:15:39.823 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.823 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:39.823 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:39.823 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:39.823 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:39.823 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:39.823 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:39.823 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:40.109 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:40.109 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:40.109 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:40.109 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:40.109 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:40.109 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:40.109 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:40.109 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:40.109 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:40.109 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:40.109 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:40.109 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.109 1+0 records in 00:15:40.109 1+0 records out 00:15:40.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000949944 s, 4.3 MB/s 00:15:40.109 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.109 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:40.109 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.109 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:40.109 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:40.109 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:40.109 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:40.109 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:40.379 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:40.379 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:40.379 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:40.379 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:40.379 17:32:13 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:40.379 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:40.379 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:40.379 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:40.379 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:40.379 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:40.379 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:40.379 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.379 1+0 records in 00:15:40.379 1+0 records out 00:15:40.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000941308 s, 4.4 MB/s 00:15:40.379 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.379 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:40.379 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.379 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:40.379 17:32:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:40.379 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:40.379 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:40.379 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:40.379 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:40.379 { 00:15:40.379 "nbd_device": "/dev/nbd0", 00:15:40.379 "bdev_name": "nvme0n1" 00:15:40.379 }, 00:15:40.379 { 00:15:40.379 "nbd_device": "/dev/nbd1", 00:15:40.379 "bdev_name": "nvme0n2" 00:15:40.379 }, 00:15:40.379 { 00:15:40.379 "nbd_device": "/dev/nbd2", 00:15:40.379 "bdev_name": "nvme0n3" 00:15:40.379 }, 00:15:40.379 { 00:15:40.379 "nbd_device": "/dev/nbd3", 00:15:40.379 "bdev_name": "nvme1n1" 00:15:40.379 }, 00:15:40.379 { 00:15:40.379 "nbd_device": "/dev/nbd4", 00:15:40.379 "bdev_name": "nvme2n1" 00:15:40.379 }, 00:15:40.379 { 00:15:40.379 "nbd_device": "/dev/nbd5", 00:15:40.379 "bdev_name": "nvme3n1" 00:15:40.379 } 00:15:40.379 ]' 00:15:40.379 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:40.379 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:40.379 { 00:15:40.379 "nbd_device": "/dev/nbd0", 00:15:40.379 "bdev_name": "nvme0n1" 00:15:40.379 }, 00:15:40.379 { 00:15:40.379 "nbd_device": "/dev/nbd1", 00:15:40.379 "bdev_name": "nvme0n2" 00:15:40.379 }, 00:15:40.379 { 00:15:40.379 "nbd_device": "/dev/nbd2", 00:15:40.379 "bdev_name": "nvme0n3" 00:15:40.379 }, 00:15:40.379 { 00:15:40.379 "nbd_device": "/dev/nbd3", 00:15:40.379 "bdev_name": "nvme1n1" 00:15:40.379 }, 00:15:40.379 { 00:15:40.379 "nbd_device": "/dev/nbd4", 00:15:40.379 "bdev_name": "nvme2n1" 00:15:40.379 }, 00:15:40.379 { 00:15:40.379 "nbd_device": "/dev/nbd5", 00:15:40.379 "bdev_name": "nvme3n1" 00:15:40.379 } 00:15:40.379 ]' 00:15:40.379 17:32:13 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:40.639 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:40.639 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:40.639 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:40.639 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:40.639 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:40.639 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.639 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:40.639 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:40.639 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:40.639 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:40.639 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.639 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.639 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:40.639 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:40.639 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.639 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.639 17:32:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:40.899 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:40.899 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:40.899 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:40.899 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.899 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.899 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:40.899 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:40.899 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.899 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.899 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:41.159 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:41.159 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:41.159 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:41.159 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.159 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:41.159 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:41.159 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:41.159 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.159 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:41.159 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:41.420 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:41.420 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:41.420 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:41.420 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.420 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:41.420 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:41.420 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:41.420 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.420 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:41.420 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:41.680 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:41.680 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:41.680 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:41.680 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.681 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:41.681 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:41.681 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:41.681 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.681 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:41.681 17:32:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:41.946 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:41.946 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:41.946 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:41.946 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.946 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:41.946 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:41.946 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:41.946 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.946 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:41.946 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:41.946 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:42.210 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:42.471 /dev/nbd0 00:15:42.471 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:42.471 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:42.471 17:32:15 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:42.471 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:42.471 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:42.471 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:42.471 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:42.471 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:42.471 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:42.471 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:42.471 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:42.471 1+0 records in 00:15:42.471 1+0 records out 00:15:42.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000905521 s, 4.5 MB/s 00:15:42.471 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.471 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:42.471 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.471 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:42.471 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:42.471 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:42.471 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:42.471 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:42.732 /dev/nbd1 00:15:42.732 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:42.732 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:42.732 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:42.732 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:42.732 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:42.732 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:42.732 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:42.732 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:42.732 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:42.732 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:42.732 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:42.732 1+0 records in 00:15:42.732 1+0 records out 00:15:42.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00075541 s, 5.4 MB/s 00:15:42.732 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.732 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:42.732 17:32:15 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.732 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:42.732 17:32:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:42.732 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:42.732 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:42.732 17:32:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:42.994 /dev/nbd10 00:15:42.994 17:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:42.994 17:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:42.994 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:42.994 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:42.994 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:42.994 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:42.994 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:42.994 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:42.994 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:42.994 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:42.994 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:42.994 1+0 records in 00:15:42.994 1+0 records out 00:15:42.994 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102466 s, 4.0 MB/s 00:15:42.994 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.994 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:42.994 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.994 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:42.994 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:42.994 17:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:42.994 17:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:42.994 17:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:42.994 /dev/nbd11 00:15:43.256 17:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:43.256 17:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:43.256 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:43.256 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:43.256 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:43.256 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:43.256 17:32:16 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:43.256 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:43.256 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:43.256 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:43.256 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:43.256 1+0 records in 00:15:43.256 1+0 records out 00:15:43.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000828048 s, 4.9 MB/s 00:15:43.256 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.256 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:43.256 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.256 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:43.256 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:43.256 17:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:43.256 17:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:43.256 17:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:43.256 /dev/nbd12 00:15:43.518 17:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:43.518 17:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:43.518 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:43.518 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:43.518 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:43.518 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:43.518 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:43.518 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:43.518 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:43.518 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:43.518 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:43.518 1+0 records in 00:15:43.518 1+0 records out 00:15:43.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120096 s, 3.4 MB/s 00:15:43.518 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.518 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:43.518 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.518 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:43.518 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:43.518 17:32:16 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:43.518 17:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:43.518 17:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:43.518 /dev/nbd13 00:15:43.518 17:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:43.518 17:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:43.518 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:43.779 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:43.779 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:43.779 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:43.779 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:43.779 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:43.779 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:43.779 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:43.779 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:43.779 1+0 records in 00:15:43.779 1+0 records out 00:15:43.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106946 s, 3.8 MB/s 00:15:43.779 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.779 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:43.779 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.779 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:43.779 17:32:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:43.779 17:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:43.779 17:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:43.779 17:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:43.779 17:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:43.779 17:32:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:43.779 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:43.779 { 00:15:43.779 "nbd_device": "/dev/nbd0", 00:15:43.779 "bdev_name": "nvme0n1" 00:15:43.779 }, 00:15:43.779 { 00:15:43.779 "nbd_device": "/dev/nbd1", 00:15:43.779 "bdev_name": "nvme0n2" 00:15:43.779 }, 00:15:43.779 { 00:15:43.779 "nbd_device": "/dev/nbd10", 00:15:43.779 "bdev_name": "nvme0n3" 00:15:43.779 }, 00:15:43.779 { 00:15:43.779 "nbd_device": "/dev/nbd11", 00:15:43.779 "bdev_name": "nvme1n1" 00:15:43.779 }, 00:15:43.779 { 00:15:43.779 "nbd_device": "/dev/nbd12", 00:15:43.779 "bdev_name": "nvme2n1" 00:15:43.779 }, 00:15:43.779 { 00:15:43.779 "nbd_device": "/dev/nbd13", 00:15:43.779 "bdev_name": "nvme3n1" 00:15:43.779 } 00:15:43.779 ]' 00:15:43.779 17:32:17 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:15:43.779 { 00:15:43.779 "nbd_device": "/dev/nbd0", 00:15:43.779 "bdev_name": "nvme0n1" 00:15:43.779 }, 00:15:43.779 { 00:15:43.779 "nbd_device": "/dev/nbd1", 00:15:43.779 "bdev_name": "nvme0n2" 00:15:43.779 }, 00:15:43.779 { 00:15:43.779 "nbd_device": "/dev/nbd10", 00:15:43.779 "bdev_name": "nvme0n3" 00:15:43.779 }, 00:15:43.779 { 00:15:43.779 "nbd_device": "/dev/nbd11", 00:15:43.779 "bdev_name": "nvme1n1" 00:15:43.779 }, 00:15:43.779 { 00:15:43.779 "nbd_device": "/dev/nbd12", 00:15:43.779 "bdev_name": "nvme2n1" 00:15:43.779 }, 00:15:43.779 { 00:15:43.779 "nbd_device": "/dev/nbd13", 00:15:43.779 "bdev_name": "nvme3n1" 00:15:43.779 } 00:15:43.779 ]' 00:15:43.779 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:44.041 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:44.041 /dev/nbd1 00:15:44.041 /dev/nbd10 00:15:44.041 /dev/nbd11 00:15:44.041 /dev/nbd12 00:15:44.041 /dev/nbd13' 00:15:44.041 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:44.041 /dev/nbd1 00:15:44.041 /dev/nbd10 00:15:44.041 /dev/nbd11 00:15:44.041 /dev/nbd12 00:15:44.041 /dev/nbd13' 00:15:44.041 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:44.041 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:44.041 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:44.041 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:44.041 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:44.041 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:44.041 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:44.041 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:44.041 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:44.041 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:44.041 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:44.041 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:44.041 256+0 records in 00:15:44.041 256+0 records out 00:15:44.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117046 s, 89.6 MB/s 00:15:44.041 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:44.041 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:44.302 256+0 records in 00:15:44.302 256+0 records out 00:15:44.302 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.248498 s, 4.2 MB/s 00:15:44.302 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:44.302 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:44.302 256+0 records in 00:15:44.302 256+0 records out 00:15:44.302 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.202431 s, 
5.2 MB/s 00:15:44.302 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:44.302 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:44.560 256+0 records in 00:15:44.560 256+0 records out 00:15:44.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169261 s, 6.2 MB/s 00:15:44.560 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:44.560 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:44.821 256+0 records in 00:15:44.821 256+0 records out 00:15:44.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13019 s, 8.1 MB/s 00:15:44.821 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:44.821 17:32:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:45.083 256+0 records in 00:15:45.083 256+0 records out 00:15:45.083 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.287667 s, 3.6 MB/s 00:15:45.083 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:45.083 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:45.345 256+0 records in 00:15:45.345 256+0 records out 00:15:45.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.247326 s, 4.2 MB/s 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:45.345 
17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.345 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:45.607 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:45.607 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:45.607 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:45.607 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.607 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.607 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:45.607 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:45.607 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.607 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.607 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:45.869 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:45.869 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:45.869 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:45.869 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.869 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.869 17:32:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:45.869 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:45.869 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.869 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.869 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:15:45.869 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:45.869 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:45.869 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:45.869 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.869 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.869 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:45.869 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:45.869 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.869 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.869 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:46.131 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:46.131 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:46.131 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:46.131 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.131 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.131 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:46.131 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:46.131 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.131 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.131 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:46.392 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:46.392 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:46.392 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:46.392 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.392 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.392 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:46.392 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:46.392 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.392 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.392 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:46.668 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:46.668 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:46.668 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:46.668 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.668 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.668 
17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:46.668 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:46.668 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.668 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:46.668 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:46.668 17:32:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:46.930 17:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:46.930 17:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:46.930 17:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:46.930 17:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:46.930 17:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:46.930 17:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:46.930 17:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:46.930 17:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:46.930 17:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:46.930 17:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:46.930 17:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:46.930 17:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:46.930 17:32:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:46.930 17:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:46.930 17:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:46.930 17:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:47.191 malloc_lvol_verify 00:15:47.191 17:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:47.453 7af6edf6-2b6f-4b4e-b602-f0c4842897ba 00:15:47.453 17:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:47.715 a3d13256-68a6-446d-b7df-a5108daf4f48 00:15:47.715 17:32:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:47.715 /dev/nbd0 00:15:47.975 17:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:47.975 17:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:47.975 17:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:47.975 17:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:47.975 17:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:47.975 mke2fs 1.47.0 (5-Feb-2023) 00:15:47.975 Discarding device blocks: 0/4096 
done 00:15:47.975 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:47.976 00:15:47.976 Allocating group tables: 0/1 done 00:15:47.976 Writing inode tables: 0/1 done 00:15:47.976 Creating journal (1024 blocks): done 00:15:47.976 Writing superblocks and filesystem accounting information: 0/1 done 00:15:47.976 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72335 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72335 ']' 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72335 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72335 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:47.976 killing process with pid 72335 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72335' 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72335 00:15:47.976 17:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72335 00:15:48.919 ************************************ 00:15:48.919 END TEST bdev_nbd 00:15:48.919 ************************************ 00:15:48.919 17:32:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:48.919 00:15:48.919 real 0m10.946s 00:15:48.919 user 0m14.821s 00:15:48.919 sys 0m3.829s 00:15:48.919 17:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.919 17:32:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
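The bdev_nbd teardown above repeats one motif per device: nbd_stop_disk is invoked over the /var/tmp/spdk-nbd.sock RPC socket, then waitfornbd_exit polls /proc/partitions until the kernel drops the device node. A minimal bash sketch of that polling pattern (the 20-try bound mirrors the "(( i <= 20 ))" loop visible in the trace; the 0.1 s sleep is an assumption, and this is not the verbatim nbd_common.sh helper):

waitfornbd_exit() {
    local nbd_name=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        # grep -q -w matches while the device is still listed in /proc/partitions
        if ! grep -q -w "$nbd_name" /proc/partitions; then
            return 0  # entry gone: the stop has completed
        fi
        sleep 0.1     # assumed poll interval
    done
    return 1          # still present after all retries: treat as failure
}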
00:15:48.919 17:32:22 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:48.919 17:32:22 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:15:48.919 17:32:22 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:15:48.919 17:32:22 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:15:48.919 17:32:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:48.919 17:32:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.919 17:32:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:48.919 ************************************ 00:15:48.919 START TEST bdev_fio 00:15:48.919 ************************************ 00:15:48.919 17:32:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:48.919 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:48.919 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:48.919 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:48.919 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:48.919 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:48.919 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:48.919 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:48.919 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:48.919 17:32:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:48.920 ************************************ 00:15:48.920 START TEST bdev_fio_rw_verify 00:15:48.920 ************************************ 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:48.920 17:32:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:49.180 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:49.180 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:49.180 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:49.180 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:49.181 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:49.181 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:49.181 fio-3.35 00:15:49.181 Starting 6 threads 00:16:01.409 00:16:01.409 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72740: Sat Dec 7 17:32:33 2024 00:16:01.409 read: IOPS=15.0k, BW=58.5MiB/s (61.4MB/s)(585MiB/10002msec) 00:16:01.409 slat (usec): min=2, max=2013, avg= 6.87, stdev=15.59 00:16:01.409 clat (usec): min=89, max=7280, avg=1311.98, stdev=741.90 00:16:01.409 lat (usec): min=92, max=7296, avg=1318.85, stdev=742.55 
00:16:01.409 clat percentiles (usec): 00:16:01.409 | 50.000th=[ 1221], 99.000th=[ 3654], 99.900th=[ 5276], 99.990th=[ 6980], 00:16:01.409 | 99.999th=[ 7308] 00:16:01.409 write: IOPS=15.3k, BW=59.7MiB/s (62.6MB/s)(597MiB/10002msec); 0 zone resets 00:16:01.409 slat (usec): min=10, max=7033, avg=41.41, stdev=136.98 00:16:01.409 clat (usec): min=79, max=8242, avg=1527.40, stdev=781.36 00:16:01.409 lat (usec): min=95, max=8263, avg=1568.80, stdev=793.76 00:16:01.409 clat percentiles (usec): 00:16:01.409 | 50.000th=[ 1418], 99.000th=[ 3982], 99.900th=[ 5604], 99.990th=[ 7898], 00:16:01.409 | 99.999th=[ 8225] 00:16:01.409 bw ( KiB/s): min=49010, max=82757, per=100.00%, avg=61611.16, stdev=1801.11, samples=114 00:16:01.409 iops : min=12248, max=20689, avg=15401.95, stdev=450.34, samples=114 00:16:01.409 lat (usec) : 100=0.01%, 250=2.12%, 500=6.64%, 750=9.63%, 1000=12.71% 00:16:01.409 lat (msec) : 2=50.24%, 4=17.88%, 10=0.78% 00:16:01.409 cpu : usr=43.08%, sys=32.86%, ctx=5586, majf=0, minf=15059 00:16:01.409 IO depths : 1=11.3%, 2=23.8%, 4=51.2%, 8=13.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:01.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.409 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.409 issued rwts: total=149818,152886,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.409 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:01.409 00:16:01.409 Run status group 0 (all jobs): 00:16:01.409 READ: bw=58.5MiB/s (61.4MB/s), 58.5MiB/s-58.5MiB/s (61.4MB/s-61.4MB/s), io=585MiB (614MB), run=10002-10002msec 00:16:01.409 WRITE: bw=59.7MiB/s (62.6MB/s), 59.7MiB/s-59.7MiB/s (62.6MB/s-62.6MB/s), io=597MiB (626MB), run=10002-10002msec 00:16:01.409 ----------------------------------------------------- 00:16:01.409 Suppressions used: 00:16:01.409 count bytes template 00:16:01.409 6 48 /usr/src/fio/parse.c 00:16:01.409 2973 285408 /usr/src/fio/iolog.c 00:16:01.409 1 8 libtcmalloc_minimal.so 00:16:01.409 1 904 libcrypto.so 00:16:01.409 ----------------------------------------------------- 00:16:01.409 00:16:01.409 00:16:01.409 real 0m12.163s 00:16:01.409 user 0m27.458s 00:16:01.409 sys 0m20.117s 00:16:01.409 17:32:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.409 ************************************ 00:16:01.409 END TEST bdev_fio_rw_verify 00:16:01.409 ************************************ 00:16:01.409 17:32:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:01.409 17:32:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:01.409 17:32:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:01.409 17:32:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:01.409 17:32:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:01.409 17:32:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:16:01.409 17:32:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:16:01.409 17:32:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:01.409 17:32:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:01.409 17:32:34 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:01.409 17:32:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:01.409 17:32:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:01.409 17:32:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:01.409 17:32:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:01.409 17:32:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:01.409 17:32:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:01.409 17:32:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:01.409 17:32:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:01.410 17:32:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "67e15ecd-41a1-479a-9823-8a6bfc890ecc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "67e15ecd-41a1-479a-9823-8a6bfc890ecc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "c401c15b-98ad-4815-8a49-0799e4da3e91"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c401c15b-98ad-4815-8a49-0799e4da3e91",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "bf019948-1bb0-4169-af61-271d8edb50e5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bf019948-1bb0-4169-af61-271d8edb50e5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "5afb8f90-090f-41de-ad09-ece5e56d86de"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5afb8f90-090f-41de-ad09-ece5e56d86de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "0dada14e-7e25-4265-83da-3956894e4b5d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0dada14e-7e25-4265-83da-3956894e4b5d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "b8ccc584-065c-4e29-986b-d66e34c1a51e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b8ccc584-065c-4e29-986b-d66e34c1a51e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:01.410 17:32:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:01.410 17:32:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:01.410 /home/vagrant/spdk_repo/spdk 00:16:01.410 17:32:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:01.410 17:32:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:01.410 17:32:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:16:01.410 00:16:01.410 real 0m12.340s 00:16:01.410 user 
0m27.530s 00:16:01.410 sys 0m20.205s 00:16:01.410 17:32:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.410 ************************************ 00:16:01.410 END TEST bdev_fio 00:16:01.410 ************************************ 00:16:01.410 17:32:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:01.410 17:32:34 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:01.410 17:32:34 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:01.410 17:32:34 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:01.410 17:32:34 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:01.410 17:32:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:01.410 ************************************ 00:16:01.410 START TEST bdev_verify 00:16:01.410 ************************************ 00:16:01.410 17:32:34 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:01.410 [2024-12-07 17:32:34.528120] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:16:01.410 [2024-12-07 17:32:34.528267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72919 ] 00:16:01.410 [2024-12-07 17:32:34.694174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:01.671 [2024-12-07 17:32:34.840499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.672 [2024-12-07 17:32:34.840610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.240 Running I/O for 5 seconds... 
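A quick consistency check on the per-second samples that follow: bdevperf was launched with -q 128 -o 4096, so each sample's MiB/s value should equal IOPS * 4096 / 2^20. Verifying the first sample in bash (the numbers are taken from the log itself):

awk 'BEGIN { printf "%.2f MiB/s\n", 22368.00 * 4096 / 1048576 }'   # prints 87.38 MiB/s, matching the log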
00:16:04.567 22368.00 IOPS, 87.38 MiB/s [2024-12-07T17:32:38.894Z] 23264.00 IOPS, 90.88 MiB/s [2024-12-07T17:32:39.841Z] 23242.67 IOPS, 90.79 MiB/s [2024-12-07T17:32:40.784Z] 22456.00 IOPS, 87.72 MiB/s [2024-12-07T17:32:40.784Z] 22540.80 IOPS, 88.05 MiB/s 00:16:07.402 Latency(us) 00:16:07.402 [2024-12-07T17:32:40.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.402 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:07.402 Verification LBA range: start 0x0 length 0x80000 00:16:07.402 nvme0n1 : 5.03 1782.36 6.96 0.00 0.00 71680.48 11544.42 65334.35 00:16:07.402 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:07.402 Verification LBA range: start 0x80000 length 0x80000 00:16:07.402 nvme0n1 : 5.07 1666.58 6.51 0.00 0.00 76637.96 8267.62 89532.26 00:16:07.402 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:07.402 Verification LBA range: start 0x0 length 0x80000 00:16:07.402 nvme0n2 : 5.04 1779.46 6.95 0.00 0.00 71661.81 9275.86 69367.34 00:16:07.402 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:07.402 Verification LBA range: start 0x80000 length 0x80000 00:16:07.402 nvme0n2 : 5.07 1666.11 6.51 0.00 0.00 76464.16 11342.77 82676.18 00:16:07.402 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:07.402 Verification LBA range: start 0x0 length 0x80000 00:16:07.402 nvme0n3 : 5.07 1768.96 6.91 0.00 0.00 71952.82 14821.22 70980.53 00:16:07.402 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:07.402 Verification LBA range: start 0x80000 length 0x80000 00:16:07.402 nvme0n3 : 5.04 1677.06 6.55 0.00 0.00 75749.64 10838.65 68964.04 00:16:07.403 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:07.403 Verification LBA range: start 0x0 length 0x20000 00:16:07.403 nvme1n1 : 5.07 1768.29 6.91 0.00 0.00 71850.43 12502.25 64931.05 00:16:07.403 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:07.403 Verification LBA range: start 0x20000 length 0x20000 00:16:07.403 nvme1n1 : 5.08 1663.41 6.50 0.00 0.00 76183.35 9427.10 75416.81 00:16:07.403 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:07.403 Verification LBA range: start 0x0 length 0xbd0bd 00:16:07.403 nvme2n1 : 5.09 2510.85 9.81 0.00 0.00 50460.44 4990.82 57268.38 00:16:07.403 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:07.403 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:07.403 nvme2n1 : 5.08 2437.92 9.52 0.00 0.00 51824.84 5797.42 72190.42 00:16:07.403 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:07.403 Verification LBA range: start 0x0 length 0xa0000 00:16:07.403 nvme3n1 : 5.09 1811.67 7.08 0.00 0.00 69864.32 5898.24 67754.14 00:16:07.403 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:07.403 Verification LBA range: start 0xa0000 length 0xa0000 00:16:07.403 nvme3n1 : 5.08 1712.95 6.69 0.00 0.00 73755.88 5318.50 75820.11 00:16:07.403 [2024-12-07T17:32:40.785Z] =================================================================================================================== 00:16:07.403 [2024-12-07T17:32:40.785Z] Total : 22245.62 86.90 0.00 0.00 68512.91 4990.82 89532.26 00:16:08.347 00:16:08.347 real 0m6.944s 00:16:08.347 user 0m11.017s 00:16:08.347 sys 0m1.649s 00:16:08.347 17:32:41 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:08.347 ************************************ 00:16:08.347 END TEST bdev_verify 00:16:08.347 ************************************ 00:16:08.347 17:32:41 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:08.347 17:32:41 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:08.347 17:32:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:08.347 17:32:41 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.347 17:32:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:08.347 ************************************ 00:16:08.347 START TEST bdev_verify_big_io 00:16:08.347 ************************************ 00:16:08.347 17:32:41 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:08.347 [2024-12-07 17:32:41.546209] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:16:08.347 [2024-12-07 17:32:41.546353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73012 ] 00:16:08.347 [2024-12-07 17:32:41.708307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:08.608 [2024-12-07 17:32:41.856145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.608 [2024-12-07 17:32:41.856159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.180 Running I/O for 5 seconds... 
00:16:15.298 1720.00 IOPS, 107.50 MiB/s [2024-12-07T17:32:48.940Z] 3604.00 IOPS, 225.25 MiB/s [2024-12-07T17:32:48.940Z] 3042.67 IOPS, 190.17 MiB/s 00:16:15.558 Latency(us) 00:16:15.558 [2024-12-07T17:32:48.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:15.558 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:15.558 Verification LBA range: start 0x0 length 0x8000 00:16:15.558 nvme0n1 : 5.54 130.00 8.12 0.00 0.00 957837.84 23189.66 974369.08 00:16:15.558 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:15.558 Verification LBA range: start 0x8000 length 0x8000 00:16:15.558 nvme0n1 : 5.97 85.71 5.36 0.00 0.00 1395294.52 179871.11 1677721.60 00:16:15.558 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:15.558 Verification LBA range: start 0x0 length 0x8000 00:16:15.558 nvme0n2 : 5.66 137.13 8.57 0.00 0.00 874203.90 6377.16 961463.53 00:16:15.558 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:15.558 Verification LBA range: start 0x8000 length 0x8000 00:16:15.558 nvme0n2 : 5.98 93.71 5.86 0.00 0.00 1230622.61 5822.62 1948738.17 00:16:15.558 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:15.558 Verification LBA range: start 0x0 length 0x8000 00:16:15.558 nvme0n3 : 5.54 138.59 8.66 0.00 0.00 854433.36 4511.90 903388.55 00:16:15.558 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:15.558 Verification LBA range: start 0x8000 length 0x8000 00:16:15.558 nvme0n3 : 6.00 82.69 5.17 0.00 0.00 1334671.54 59284.87 2929560.02 00:16:15.558 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:15.558 Verification LBA range: start 0x0 length 0x2000 00:16:15.558 nvme1n1 : 5.78 135.65 8.48 0.00 0.00 837724.59 6503.19 1284102.30 00:16:15.558 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:15.558 Verification LBA range: start 0x2000 length 0x2000 00:16:15.558 nvme1n1 : 6.04 82.14 5.13 0.00 0.00 1280004.27 27625.94 2413337.99 00:16:15.558 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:15.558 Verification LBA range: start 0x0 length 0xbd0b 00:16:15.558 nvme2n1 : 5.78 171.69 10.73 0.00 0.00 646609.07 9427.10 693673.35 00:16:15.558 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:15.558 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:15.558 nvme2n1 : 6.21 162.35 10.15 0.00 0.00 623538.82 4285.05 2335904.69 00:16:15.558 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:15.558 Verification LBA range: start 0x0 length 0xa000 00:16:15.558 nvme3n1 : 5.93 172.79 10.80 0.00 0.00 627737.53 316.65 916294.10 00:16:15.558 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:15.558 Verification LBA range: start 0xa000 length 0xa000 00:16:15.558 nvme3n1 : 6.42 245.39 15.34 0.00 0.00 395749.88 485.22 2400432.44 00:16:15.558 [2024-12-07T17:32:48.940Z] =================================================================================================================== 00:16:15.558 [2024-12-07T17:32:48.940Z] Total : 1637.83 102.36 0.00 0.00 815269.09 316.65 2929560.02 00:16:16.555 00:16:16.555 real 0m8.167s 00:16:16.555 user 0m14.976s 00:16:16.555 sys 0m0.491s 00:16:16.555 17:32:49 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:16.555 ************************************ 
00:16:16.555 END TEST bdev_verify_big_io 00:16:16.555 ************************************ 00:16:16.555 17:32:49 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.555 17:32:49 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:16.555 17:32:49 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:16.555 17:32:49 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.555 17:32:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:16.555 ************************************ 00:16:16.555 START TEST bdev_write_zeroes 00:16:16.555 ************************************ 00:16:16.555 17:32:49 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:16.555 [2024-12-07 17:32:49.757702] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:16:16.555 [2024-12-07 17:32:49.757821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73127 ] 00:16:16.555 [2024-12-07 17:32:49.912606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.820 [2024-12-07 17:32:49.998949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.082 Running I/O for 1 seconds... 00:16:18.026 80192.00 IOPS, 313.25 MiB/s 00:16:18.026 Latency(us) 00:16:18.026 [2024-12-07T17:32:51.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.026 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:18.026 nvme0n1 : 1.02 13072.86 51.07 0.00 0.00 9782.34 6956.90 19963.27 00:16:18.026 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:18.026 nvme0n2 : 1.03 13095.39 51.15 0.00 0.00 9759.53 6956.90 17543.48 00:16:18.026 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:18.026 nvme0n3 : 1.02 13056.79 51.00 0.00 0.00 9781.53 7007.31 20366.57 00:16:18.026 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:18.026 nvme1n1 : 1.02 13041.79 50.94 0.00 0.00 9786.88 7007.31 20568.22 00:16:18.026 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:18.026 nvme2n1 : 1.03 14141.71 55.24 0.00 0.00 9018.95 3705.30 18854.20 00:16:18.026 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:18.026 nvme3n1 : 1.03 13070.79 51.06 0.00 0.00 9752.66 5696.59 20467.40 00:16:18.026 [2024-12-07T17:32:51.408Z] =================================================================================================================== 00:16:18.026 [2024-12-07T17:32:51.408Z] Total : 79479.33 310.47 0.00 0.00 9637.94 3705.30 20568.22 00:16:18.969 00:16:18.970 real 0m2.308s 00:16:18.970 user 0m1.667s 00:16:18.970 sys 0m0.471s 00:16:18.970 17:32:52 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:18.970 17:32:52 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:18.970 
************************************ 00:16:18.970 END TEST bdev_write_zeroes 00:16:18.970 ************************************ 00:16:18.970 17:32:52 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:18.970 17:32:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:18.970 17:32:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:18.970 17:32:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:18.970 ************************************ 00:16:18.970 START TEST bdev_json_nonenclosed 00:16:18.970 ************************************ 00:16:18.970 17:32:52 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:18.970 [2024-12-07 17:32:52.129512] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:16:18.970 [2024-12-07 17:32:52.129648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73175 ] 00:16:18.970 [2024-12-07 17:32:52.288331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.231 [2024-12-07 17:32:52.387217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.231 [2024-12-07 17:32:52.387288] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:19.231 [2024-12-07 17:32:52.387304] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:19.231 [2024-12-07 17:32:52.387312] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:19.231 00:16:19.231 real 0m0.471s 00:16:19.231 user 0m0.272s 00:16:19.231 sys 0m0.094s 00:16:19.231 17:32:52 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:19.231 ************************************ 00:16:19.231 END TEST bdev_json_nonenclosed 00:16:19.231 17:32:52 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:19.231 ************************************ 00:16:19.231 17:32:52 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:19.231 17:32:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:19.231 17:32:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:19.231 17:32:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:19.231 ************************************ 00:16:19.231 START TEST bdev_json_nonarray 00:16:19.231 ************************************ 00:16:19.231 17:32:52 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:19.491 [2024-12-07 17:32:52.661549] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
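Both of these negative tests hand bdevperf a deliberately malformed --json file and expect the app to refuse to start. The fixtures themselves (nonenclosed.json, nonarray.json) are not reproduced in this log, but the two error paths exercised here pin down their shapes; the snippets below are assumed reconstructions for illustration, not the actual test files.

  # nonenclosed.json (assumed): top-level keys without the enclosing object
  #   "subsystems": []        -> rejected: "Invalid JSON configuration: not enclosed in {}"
  # nonarray.json (assumed): "subsystems" present but not an array
  #   { "subsystems": {} }    -> rejected: "Invalid JSON configuration: 'subsystems' should be an array"
  # a well-formed minimal config, by contrast:
  #   { "subsystems": [] }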
00:16:19.492 [2024-12-07 17:32:52.661685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73195 ] 00:16:19.492 [2024-12-07 17:32:52.818601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.753 [2024-12-07 17:32:52.904837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.753 [2024-12-07 17:32:52.904913] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:16:19.753 [2024-12-07 17:32:52.904928] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:19.753 [2024-12-07 17:32:52.904937] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:19.753 00:16:19.753 real 0m0.452s 00:16:19.753 user 0m0.255s 00:16:19.753 sys 0m0.091s 00:16:19.753 ************************************ 00:16:19.753 END TEST bdev_json_nonarray 00:16:19.753 ************************************ 00:16:19.753 17:32:53 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:19.753 17:32:53 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:19.753 17:32:53 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:16:19.753 17:32:53 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:16:19.753 17:32:53 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:16:19.753 17:32:53 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:19.753 17:32:53 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:16:19.753 17:32:53 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:19.753 17:32:53 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:19.753 17:32:53 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:19.753 17:32:53 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:19.753 17:32:53 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:19.753 17:32:53 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:19.753 17:32:53 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:20.326 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:24.537 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:24.537 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:24.537 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:24.537 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:24.537 00:16:24.537 real 0m54.389s 00:16:24.537 user 1m22.143s 00:16:24.537 sys 0m33.855s 00:16:24.537 17:32:57 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.537 17:32:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:24.537 ************************************ 00:16:24.537 END TEST blockdev_xnvme 00:16:24.537 ************************************ 00:16:24.537 17:32:57 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:24.537 17:32:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:24.537 17:32:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.537 17:32:57 -- 
common/autotest_common.sh@10 -- # set +x 00:16:24.537 ************************************ 00:16:24.537 START TEST ublk 00:16:24.537 ************************************ 00:16:24.537 17:32:57 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:24.798 * Looking for test storage... 00:16:24.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:24.798 17:32:57 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:24.798 17:32:57 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:16:24.798 17:32:57 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:24.798 17:32:58 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:24.798 17:32:58 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:24.798 17:32:58 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:24.798 17:32:58 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:24.798 17:32:58 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:24.798 17:32:58 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:24.798 17:32:58 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:24.798 17:32:58 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:24.798 17:32:58 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:24.798 17:32:58 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:24.798 17:32:58 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:24.798 17:32:58 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:24.798 17:32:58 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:24.798 17:32:58 ublk -- scripts/common.sh@345 -- # : 1 00:16:24.798 17:32:58 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:24.798 17:32:58 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:24.798 17:32:58 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:24.798 17:32:58 ublk -- scripts/common.sh@353 -- # local d=1 00:16:24.798 17:32:58 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:24.798 17:32:58 ublk -- scripts/common.sh@355 -- # echo 1 00:16:24.798 17:32:58 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:24.798 17:32:58 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:24.798 17:32:58 ublk -- scripts/common.sh@353 -- # local d=2 00:16:24.798 17:32:58 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:24.798 17:32:58 ublk -- scripts/common.sh@355 -- # echo 2 00:16:24.798 17:32:58 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:24.798 17:32:58 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:24.798 17:32:58 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:24.798 17:32:58 ublk -- scripts/common.sh@368 -- # return 0 00:16:24.798 17:32:58 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:24.798 17:32:58 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:24.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.798 --rc genhtml_branch_coverage=1 00:16:24.798 --rc genhtml_function_coverage=1 00:16:24.798 --rc genhtml_legend=1 00:16:24.798 --rc geninfo_all_blocks=1 00:16:24.798 --rc geninfo_unexecuted_blocks=1 00:16:24.798 00:16:24.798 ' 00:16:24.798 17:32:58 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:24.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.798 --rc genhtml_branch_coverage=1 00:16:24.798 --rc genhtml_function_coverage=1 00:16:24.798 --rc genhtml_legend=1 00:16:24.798 --rc geninfo_all_blocks=1 00:16:24.798 --rc geninfo_unexecuted_blocks=1 00:16:24.798 00:16:24.798 ' 00:16:24.798 17:32:58 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:24.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.798 --rc genhtml_branch_coverage=1 00:16:24.798 --rc genhtml_function_coverage=1 00:16:24.798 --rc genhtml_legend=1 00:16:24.798 --rc geninfo_all_blocks=1 00:16:24.798 --rc geninfo_unexecuted_blocks=1 00:16:24.798 00:16:24.799 ' 00:16:24.799 17:32:58 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:24.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.799 --rc genhtml_branch_coverage=1 00:16:24.799 --rc genhtml_function_coverage=1 00:16:24.799 --rc genhtml_legend=1 00:16:24.799 --rc geninfo_all_blocks=1 00:16:24.799 --rc geninfo_unexecuted_blocks=1 00:16:24.799 00:16:24.799 ' 00:16:24.799 17:32:58 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:24.799 17:32:58 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:24.799 17:32:58 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:24.799 17:32:58 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:24.799 17:32:58 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:24.799 17:32:58 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:24.799 17:32:58 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:24.799 17:32:58 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:24.799 17:32:58 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:24.799 17:32:58 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:24.799 17:32:58 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:24.799 17:32:58 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:24.799 17:32:58 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:24.799 17:32:58 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:24.799 17:32:58 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:24.799 17:32:58 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:24.799 17:32:58 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:24.799 17:32:58 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:24.799 17:32:58 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:24.799 17:32:58 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:24.799 17:32:58 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:24.799 17:32:58 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.799 17:32:58 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:24.799 ************************************ 00:16:24.799 START TEST test_save_ublk_config 00:16:24.799 ************************************ 00:16:24.799 17:32:58 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:24.799 17:32:58 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:24.799 17:32:58 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73494 00:16:24.799 17:32:58 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:24.799 17:32:58 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73494 00:16:24.799 17:32:58 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:24.799 17:32:58 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73494 ']' 00:16:24.799 17:32:58 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.799 17:32:58 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.799 17:32:58 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.799 17:32:58 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.799 17:32:58 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:24.799 [2024-12-07 17:32:58.143448] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:16:24.799 [2024-12-07 17:32:58.144088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73494 ] 00:16:25.061 [2024-12-07 17:32:58.309330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.322 [2024-12-07 17:32:58.459065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.266 17:32:59 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.266 17:32:59 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:26.266 17:32:59 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:26.266 17:32:59 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:26.266 17:32:59 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.266 17:32:59 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:26.266 [2024-12-07 17:32:59.303015] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:26.266 [2024-12-07 17:32:59.303993] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:26.266 malloc0 00:16:26.266 [2024-12-07 17:32:59.383162] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:26.266 [2024-12-07 17:32:59.383286] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:26.266 [2024-12-07 17:32:59.383299] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:26.266 [2024-12-07 17:32:59.383308] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:26.266 [2024-12-07 17:32:59.391222] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:26.266 [2024-12-07 17:32:59.391257] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:26.266 [2024-12-07 17:32:59.399023] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:26.266 [2024-12-07 17:32:59.399167] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:26.266 [2024-12-07 17:32:59.416028] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:26.266 0 00:16:26.266 17:32:59 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.266 17:32:59 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:26.266 17:32:59 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.266 17:32:59 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:26.526 17:32:59 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.526 17:32:59 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:26.526 "subsystems": [ 00:16:26.526 { 00:16:26.526 "subsystem": "fsdev", 00:16:26.526 "config": [ 00:16:26.526 { 00:16:26.526 "method": "fsdev_set_opts", 00:16:26.526 "params": { 00:16:26.526 "fsdev_io_pool_size": 65535, 00:16:26.526 "fsdev_io_cache_size": 256 00:16:26.526 } 00:16:26.526 } 00:16:26.526 ] 00:16:26.526 }, 00:16:26.526 { 00:16:26.526 "subsystem": "keyring", 00:16:26.526 "config": [] 00:16:26.526 }, 00:16:26.526 { 00:16:26.526 "subsystem": "iobuf", 00:16:26.526 "config": [ 00:16:26.526 { 
00:16:26.526 "method": "iobuf_set_options", 00:16:26.526 "params": { 00:16:26.526 "small_pool_count": 8192, 00:16:26.526 "large_pool_count": 1024, 00:16:26.526 "small_bufsize": 8192, 00:16:26.526 "large_bufsize": 135168, 00:16:26.526 "enable_numa": false 00:16:26.526 } 00:16:26.526 } 00:16:26.526 ] 00:16:26.526 }, 00:16:26.526 { 00:16:26.526 "subsystem": "sock", 00:16:26.526 "config": [ 00:16:26.526 { 00:16:26.526 "method": "sock_set_default_impl", 00:16:26.526 "params": { 00:16:26.526 "impl_name": "posix" 00:16:26.526 } 00:16:26.526 }, 00:16:26.526 { 00:16:26.526 "method": "sock_impl_set_options", 00:16:26.526 "params": { 00:16:26.526 "impl_name": "ssl", 00:16:26.526 "recv_buf_size": 4096, 00:16:26.526 "send_buf_size": 4096, 00:16:26.526 "enable_recv_pipe": true, 00:16:26.526 "enable_quickack": false, 00:16:26.526 "enable_placement_id": 0, 00:16:26.526 "enable_zerocopy_send_server": true, 00:16:26.526 "enable_zerocopy_send_client": false, 00:16:26.526 "zerocopy_threshold": 0, 00:16:26.526 "tls_version": 0, 00:16:26.526 "enable_ktls": false 00:16:26.526 } 00:16:26.526 }, 00:16:26.526 { 00:16:26.526 "method": "sock_impl_set_options", 00:16:26.526 "params": { 00:16:26.526 "impl_name": "posix", 00:16:26.526 "recv_buf_size": 2097152, 00:16:26.526 "send_buf_size": 2097152, 00:16:26.526 "enable_recv_pipe": true, 00:16:26.526 "enable_quickack": false, 00:16:26.526 "enable_placement_id": 0, 00:16:26.526 "enable_zerocopy_send_server": true, 00:16:26.526 "enable_zerocopy_send_client": false, 00:16:26.526 "zerocopy_threshold": 0, 00:16:26.526 "tls_version": 0, 00:16:26.526 "enable_ktls": false 00:16:26.526 } 00:16:26.526 } 00:16:26.526 ] 00:16:26.526 }, 00:16:26.526 { 00:16:26.526 "subsystem": "vmd", 00:16:26.526 "config": [] 00:16:26.526 }, 00:16:26.526 { 00:16:26.526 "subsystem": "accel", 00:16:26.526 "config": [ 00:16:26.526 { 00:16:26.526 "method": "accel_set_options", 00:16:26.526 "params": { 00:16:26.526 "small_cache_size": 128, 00:16:26.526 "large_cache_size": 16, 00:16:26.526 "task_count": 2048, 00:16:26.526 "sequence_count": 2048, 00:16:26.526 "buf_count": 2048 00:16:26.526 } 00:16:26.526 } 00:16:26.526 ] 00:16:26.526 }, 00:16:26.526 { 00:16:26.526 "subsystem": "bdev", 00:16:26.526 "config": [ 00:16:26.526 { 00:16:26.526 "method": "bdev_set_options", 00:16:26.526 "params": { 00:16:26.526 "bdev_io_pool_size": 65535, 00:16:26.526 "bdev_io_cache_size": 256, 00:16:26.526 "bdev_auto_examine": true, 00:16:26.526 "iobuf_small_cache_size": 128, 00:16:26.526 "iobuf_large_cache_size": 16 00:16:26.526 } 00:16:26.526 }, 00:16:26.526 { 00:16:26.526 "method": "bdev_raid_set_options", 00:16:26.526 "params": { 00:16:26.526 "process_window_size_kb": 1024, 00:16:26.526 "process_max_bandwidth_mb_sec": 0 00:16:26.526 } 00:16:26.526 }, 00:16:26.526 { 00:16:26.526 "method": "bdev_iscsi_set_options", 00:16:26.526 "params": { 00:16:26.526 "timeout_sec": 30 00:16:26.526 } 00:16:26.526 }, 00:16:26.526 { 00:16:26.526 "method": "bdev_nvme_set_options", 00:16:26.526 "params": { 00:16:26.526 "action_on_timeout": "none", 00:16:26.526 "timeout_us": 0, 00:16:26.526 "timeout_admin_us": 0, 00:16:26.526 "keep_alive_timeout_ms": 10000, 00:16:26.526 "arbitration_burst": 0, 00:16:26.526 "low_priority_weight": 0, 00:16:26.526 "medium_priority_weight": 0, 00:16:26.526 "high_priority_weight": 0, 00:16:26.526 "nvme_adminq_poll_period_us": 10000, 00:16:26.526 "nvme_ioq_poll_period_us": 0, 00:16:26.526 "io_queue_requests": 0, 00:16:26.526 "delay_cmd_submit": true, 00:16:26.527 "transport_retry_count": 4, 00:16:26.527 
"bdev_retry_count": 3, 00:16:26.527 "transport_ack_timeout": 0, 00:16:26.527 "ctrlr_loss_timeout_sec": 0, 00:16:26.527 "reconnect_delay_sec": 0, 00:16:26.527 "fast_io_fail_timeout_sec": 0, 00:16:26.527 "disable_auto_failback": false, 00:16:26.527 "generate_uuids": false, 00:16:26.527 "transport_tos": 0, 00:16:26.527 "nvme_error_stat": false, 00:16:26.527 "rdma_srq_size": 0, 00:16:26.527 "io_path_stat": false, 00:16:26.527 "allow_accel_sequence": false, 00:16:26.527 "rdma_max_cq_size": 0, 00:16:26.527 "rdma_cm_event_timeout_ms": 0, 00:16:26.527 "dhchap_digests": [ 00:16:26.527 "sha256", 00:16:26.527 "sha384", 00:16:26.527 "sha512" 00:16:26.527 ], 00:16:26.527 "dhchap_dhgroups": [ 00:16:26.527 "null", 00:16:26.527 "ffdhe2048", 00:16:26.527 "ffdhe3072", 00:16:26.527 "ffdhe4096", 00:16:26.527 "ffdhe6144", 00:16:26.527 "ffdhe8192" 00:16:26.527 ] 00:16:26.527 } 00:16:26.527 }, 00:16:26.527 { 00:16:26.527 "method": "bdev_nvme_set_hotplug", 00:16:26.527 "params": { 00:16:26.527 "period_us": 100000, 00:16:26.527 "enable": false 00:16:26.527 } 00:16:26.527 }, 00:16:26.527 { 00:16:26.527 "method": "bdev_malloc_create", 00:16:26.527 "params": { 00:16:26.527 "name": "malloc0", 00:16:26.527 "num_blocks": 8192, 00:16:26.527 "block_size": 4096, 00:16:26.527 "physical_block_size": 4096, 00:16:26.527 "uuid": "c91422d2-c3f0-422c-b7cb-bb722bc200a4", 00:16:26.527 "optimal_io_boundary": 0, 00:16:26.527 "md_size": 0, 00:16:26.527 "dif_type": 0, 00:16:26.527 "dif_is_head_of_md": false, 00:16:26.527 "dif_pi_format": 0 00:16:26.527 } 00:16:26.527 }, 00:16:26.527 { 00:16:26.527 "method": "bdev_wait_for_examine" 00:16:26.527 } 00:16:26.527 ] 00:16:26.527 }, 00:16:26.527 { 00:16:26.527 "subsystem": "scsi", 00:16:26.527 "config": null 00:16:26.527 }, 00:16:26.527 { 00:16:26.527 "subsystem": "scheduler", 00:16:26.527 "config": [ 00:16:26.527 { 00:16:26.527 "method": "framework_set_scheduler", 00:16:26.527 "params": { 00:16:26.527 "name": "static" 00:16:26.527 } 00:16:26.527 } 00:16:26.527 ] 00:16:26.527 }, 00:16:26.527 { 00:16:26.527 "subsystem": "vhost_scsi", 00:16:26.527 "config": [] 00:16:26.527 }, 00:16:26.527 { 00:16:26.527 "subsystem": "vhost_blk", 00:16:26.527 "config": [] 00:16:26.527 }, 00:16:26.527 { 00:16:26.527 "subsystem": "ublk", 00:16:26.527 "config": [ 00:16:26.527 { 00:16:26.527 "method": "ublk_create_target", 00:16:26.527 "params": { 00:16:26.527 "cpumask": "1" 00:16:26.527 } 00:16:26.527 }, 00:16:26.527 { 00:16:26.527 "method": "ublk_start_disk", 00:16:26.527 "params": { 00:16:26.527 "bdev_name": "malloc0", 00:16:26.527 "ublk_id": 0, 00:16:26.527 "num_queues": 1, 00:16:26.527 "queue_depth": 128 00:16:26.527 } 00:16:26.527 } 00:16:26.527 ] 00:16:26.527 }, 00:16:26.527 { 00:16:26.527 "subsystem": "nbd", 00:16:26.527 "config": [] 00:16:26.527 }, 00:16:26.527 { 00:16:26.527 "subsystem": "nvmf", 00:16:26.527 "config": [ 00:16:26.527 { 00:16:26.527 "method": "nvmf_set_config", 00:16:26.527 "params": { 00:16:26.527 "discovery_filter": "match_any", 00:16:26.527 "admin_cmd_passthru": { 00:16:26.527 "identify_ctrlr": false 00:16:26.527 }, 00:16:26.527 "dhchap_digests": [ 00:16:26.527 "sha256", 00:16:26.527 "sha384", 00:16:26.527 "sha512" 00:16:26.527 ], 00:16:26.527 "dhchap_dhgroups": [ 00:16:26.527 "null", 00:16:26.527 "ffdhe2048", 00:16:26.527 "ffdhe3072", 00:16:26.527 "ffdhe4096", 00:16:26.527 "ffdhe6144", 00:16:26.527 "ffdhe8192" 00:16:26.527 ] 00:16:26.527 } 00:16:26.527 }, 00:16:26.527 { 00:16:26.527 "method": "nvmf_set_max_subsystems", 00:16:26.527 "params": { 00:16:26.527 "max_subsystems": 1024 
00:16:26.527 } 00:16:26.527 }, 00:16:26.527 { 00:16:26.527 "method": "nvmf_set_crdt", 00:16:26.527 "params": { 00:16:26.527 "crdt1": 0, 00:16:26.527 "crdt2": 0, 00:16:26.527 "crdt3": 0 00:16:26.527 } 00:16:26.527 } 00:16:26.527 ] 00:16:26.527 }, 00:16:26.527 { 00:16:26.527 "subsystem": "iscsi", 00:16:26.527 "config": [ 00:16:26.527 { 00:16:26.527 "method": "iscsi_set_options", 00:16:26.527 "params": { 00:16:26.527 "node_base": "iqn.2016-06.io.spdk", 00:16:26.527 "max_sessions": 128, 00:16:26.527 "max_connections_per_session": 2, 00:16:26.527 "max_queue_depth": 64, 00:16:26.527 "default_time2wait": 2, 00:16:26.527 "default_time2retain": 20, 00:16:26.527 "first_burst_length": 8192, 00:16:26.527 "immediate_data": true, 00:16:26.527 "allow_duplicated_isid": false, 00:16:26.527 "error_recovery_level": 0, 00:16:26.527 "nop_timeout": 60, 00:16:26.527 "nop_in_interval": 30, 00:16:26.527 "disable_chap": false, 00:16:26.527 "require_chap": false, 00:16:26.527 "mutual_chap": false, 00:16:26.527 "chap_group": 0, 00:16:26.527 "max_large_datain_per_connection": 64, 00:16:26.527 "max_r2t_per_connection": 4, 00:16:26.527 "pdu_pool_size": 36864, 00:16:26.527 "immediate_data_pool_size": 16384, 00:16:26.527 "data_out_pool_size": 2048 00:16:26.527 } 00:16:26.527 } 00:16:26.527 ] 00:16:26.527 } 00:16:26.527 ] 00:16:26.527 }' 00:16:26.527 17:32:59 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73494 00:16:26.527 17:32:59 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73494 ']' 00:16:26.527 17:32:59 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73494 00:16:26.527 17:32:59 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:26.527 17:32:59 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:26.527 17:32:59 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73494 00:16:26.527 17:32:59 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:26.527 killing process with pid 73494 00:16:26.527 17:32:59 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:26.527 17:32:59 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73494' 00:16:26.527 17:32:59 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73494 00:16:26.527 17:32:59 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73494 00:16:27.926 [2024-12-07 17:33:00.916400] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:27.926 [2024-12-07 17:33:00.960043] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:27.926 [2024-12-07 17:33:00.960221] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:27.926 [2024-12-07 17:33:00.961277] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:27.926 [2024-12-07 17:33:00.961337] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:27.926 [2024-12-07 17:33:00.961353] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:27.926 [2024-12-07 17:33:00.961391] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:27.926 [2024-12-07 17:33:00.961585] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:29.313 17:33:02 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73560 00:16:29.314 17:33:02 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 73560 00:16:29.314 17:33:02 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73560 ']' 00:16:29.314 17:33:02 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.314 17:33:02 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:29.314 17:33:02 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.314 17:33:02 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.314 17:33:02 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.314 17:33:02 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:29.314 17:33:02 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:29.314 "subsystems": [ 00:16:29.314 { 00:16:29.314 "subsystem": "fsdev", 00:16:29.314 "config": [ 00:16:29.314 { 00:16:29.314 "method": "fsdev_set_opts", 00:16:29.314 "params": { 00:16:29.314 "fsdev_io_pool_size": 65535, 00:16:29.314 "fsdev_io_cache_size": 256 00:16:29.314 } 00:16:29.314 } 00:16:29.314 ] 00:16:29.314 }, 00:16:29.314 { 00:16:29.314 "subsystem": "keyring", 00:16:29.314 "config": [] 00:16:29.314 }, 00:16:29.314 { 00:16:29.314 "subsystem": "iobuf", 00:16:29.314 "config": [ 00:16:29.314 { 00:16:29.314 "method": "iobuf_set_options", 00:16:29.314 "params": { 00:16:29.314 "small_pool_count": 8192, 00:16:29.314 "large_pool_count": 1024, 00:16:29.314 "small_bufsize": 8192, 00:16:29.314 "large_bufsize": 135168, 00:16:29.314 "enable_numa": false 00:16:29.314 } 00:16:29.314 } 00:16:29.314 ] 00:16:29.314 }, 00:16:29.314 { 00:16:29.314 "subsystem": "sock", 00:16:29.314 "config": [ 00:16:29.314 { 00:16:29.314 "method": "sock_set_default_impl", 00:16:29.314 "params": { 00:16:29.314 "impl_name": "posix" 00:16:29.314 } 00:16:29.314 }, 00:16:29.314 { 00:16:29.314 "method": "sock_impl_set_options", 00:16:29.314 "params": { 00:16:29.314 "impl_name": "ssl", 00:16:29.314 "recv_buf_size": 4096, 00:16:29.314 "send_buf_size": 4096, 00:16:29.314 "enable_recv_pipe": true, 00:16:29.314 "enable_quickack": false, 00:16:29.314 "enable_placement_id": 0, 00:16:29.314 "enable_zerocopy_send_server": true, 00:16:29.314 "enable_zerocopy_send_client": false, 00:16:29.314 "zerocopy_threshold": 0, 00:16:29.314 "tls_version": 0, 00:16:29.314 "enable_ktls": false 00:16:29.314 } 00:16:29.314 }, 00:16:29.314 { 00:16:29.314 "method": "sock_impl_set_options", 00:16:29.314 "params": { 00:16:29.314 "impl_name": "posix", 00:16:29.314 "recv_buf_size": 2097152, 00:16:29.314 "send_buf_size": 2097152, 00:16:29.314 "enable_recv_pipe": true, 00:16:29.314 "enable_quickack": false, 00:16:29.314 "enable_placement_id": 0, 00:16:29.314 "enable_zerocopy_send_server": true, 00:16:29.314 "enable_zerocopy_send_client": false, 00:16:29.314 "zerocopy_threshold": 0, 00:16:29.314 "tls_version": 0, 00:16:29.314 "enable_ktls": false 00:16:29.314 } 00:16:29.314 } 00:16:29.314 ] 00:16:29.314 }, 00:16:29.314 { 00:16:29.314 "subsystem": "vmd", 00:16:29.314 "config": [] 00:16:29.314 }, 00:16:29.314 { 00:16:29.314 "subsystem": "accel", 00:16:29.314 "config": [ 00:16:29.314 { 00:16:29.314 "method": "accel_set_options", 00:16:29.314 "params": { 00:16:29.314 "small_cache_size": 128, 
00:16:29.314 "large_cache_size": 16, 00:16:29.314 "task_count": 2048, 00:16:29.314 "sequence_count": 2048, 00:16:29.314 "buf_count": 2048 00:16:29.314 } 00:16:29.314 } 00:16:29.314 ] 00:16:29.314 }, 00:16:29.314 { 00:16:29.314 "subsystem": "bdev", 00:16:29.314 "config": [ 00:16:29.314 { 00:16:29.314 "method": "bdev_set_options", 00:16:29.314 "params": { 00:16:29.314 "bdev_io_pool_size": 65535, 00:16:29.314 "bdev_io_cache_size": 256, 00:16:29.314 "bdev_auto_examine": true, 00:16:29.314 "iobuf_small_cache_size": 128, 00:16:29.314 "iobuf_large_cache_size": 16 00:16:29.314 } 00:16:29.314 }, 00:16:29.314 { 00:16:29.314 "method": "bdev_raid_set_options", 00:16:29.314 "params": { 00:16:29.314 "process_window_size_kb": 1024, 00:16:29.314 "process_max_bandwidth_mb_sec": 0 00:16:29.314 } 00:16:29.314 }, 00:16:29.314 { 00:16:29.314 "method": "bdev_iscsi_set_options", 00:16:29.314 "params": { 00:16:29.314 "timeout_sec": 30 00:16:29.314 } 00:16:29.314 }, 00:16:29.314 { 00:16:29.314 "method": "bdev_nvme_set_options", 00:16:29.314 "params": { 00:16:29.314 "action_on_timeout": "none", 00:16:29.314 "timeout_us": 0, 00:16:29.314 "timeout_admin_us": 0, 00:16:29.314 "keep_alive_timeout_ms": 10000, 00:16:29.314 "arbitration_burst": 0, 00:16:29.314 "low_priority_weight": 0, 00:16:29.314 "medium_priority_weight": 0, 00:16:29.314 "high_priority_weight": 0, 00:16:29.314 "nvme_adminq_poll_period_us": 10000, 00:16:29.314 "nvme_ioq_poll_period_us": 0, 00:16:29.314 "io_queue_requests": 0, 00:16:29.314 "delay_cmd_submit": true, 00:16:29.314 "transport_retry_count": 4, 00:16:29.314 "bdev_retry_count": 3, 00:16:29.314 "transport_ack_timeout": 0, 00:16:29.314 "ctrlr_loss_timeout_sec": 0, 00:16:29.314 "reconnect_delay_sec": 0, 00:16:29.314 "fast_io_fail_timeout_sec": 0, 00:16:29.314 "disable_auto_failback": false, 00:16:29.314 "generate_uuids": false, 00:16:29.314 "transport_tos": 0, 00:16:29.314 "nvme_error_stat": false, 00:16:29.314 "rdma_srq_size": 0, 00:16:29.314 "io_path_stat": false, 00:16:29.314 "allow_accel_sequence": false, 00:16:29.314 "rdma_max_cq_size": 0, 00:16:29.314 "rdma_cm_event_timeout_ms": 0, 00:16:29.314 "dhchap_digests": [ 00:16:29.314 "sha256", 00:16:29.314 "sha384", 00:16:29.314 "sha512" 00:16:29.314 ], 00:16:29.314 "dhchap_dhgroups": [ 00:16:29.314 "null", 00:16:29.314 "ffdhe2048", 00:16:29.314 "ffdhe3072", 00:16:29.314 "ffdhe4096", 00:16:29.314 "ffdhe6144", 00:16:29.314 "ffdhe8192" 00:16:29.314 ] 00:16:29.314 } 00:16:29.314 }, 00:16:29.314 { 00:16:29.314 "method": "bdev_nvme_set_hotplug", 00:16:29.314 "params": { 00:16:29.314 "period_us": 100000, 00:16:29.314 "enable": false 00:16:29.314 } 00:16:29.314 }, 00:16:29.314 { 00:16:29.314 "method": "bdev_malloc_create", 00:16:29.314 "params": { 00:16:29.314 "name": "malloc0", 00:16:29.314 "num_blocks": 8192, 00:16:29.314 "block_size": 4096, 00:16:29.314 "physical_block_size": 4096, 00:16:29.314 "uuid": "c91422d2-c3f0-422c-b7cb-bb722bc200a4", 00:16:29.314 "optimal_io_boundary": 0, 00:16:29.314 "md_size": 0, 00:16:29.314 "dif_type": 0, 00:16:29.314 "dif_is_head_of_md": false, 00:16:29.314 "dif_pi_format": 0 00:16:29.314 } 00:16:29.314 }, 00:16:29.314 { 00:16:29.314 "method": "bdev_wait_for_examine" 00:16:29.314 } 00:16:29.314 ] 00:16:29.314 }, 00:16:29.314 { 00:16:29.314 "subsystem": "scsi", 00:16:29.314 "config": null 00:16:29.314 }, 00:16:29.314 { 00:16:29.314 "subsystem": "scheduler", 00:16:29.314 "config": [ 00:16:29.314 { 00:16:29.314 "method": "framework_set_scheduler", 00:16:29.314 "params": { 00:16:29.314 "name": "static" 00:16:29.314 } 
00:16:29.314 } 00:16:29.314 ] 00:16:29.314 }, 00:16:29.314 { 00:16:29.314 "subsystem": "vhost_scsi", 00:16:29.314 "config": [] 00:16:29.314 }, 00:16:29.314 { 00:16:29.314 "subsystem": "vhost_blk", 00:16:29.314 "config": [] 00:16:29.314 }, 00:16:29.314 { 00:16:29.314 "subsystem": "ublk", 00:16:29.314 "config": [ 00:16:29.314 { 00:16:29.314 "method": "ublk_create_target", 00:16:29.314 "params": { 00:16:29.314 "cpumask": "1" 00:16:29.314 } 00:16:29.314 }, 00:16:29.314 { 00:16:29.314 "method": "ublk_start_disk", 00:16:29.314 "params": { 00:16:29.314 "bdev_name": "malloc0", 00:16:29.314 "ublk_id": 0, 00:16:29.314 "num_queues": 1, 00:16:29.314 "queue_depth": 128 00:16:29.314 } 00:16:29.314 } 00:16:29.314 ] 00:16:29.314 }, 00:16:29.314 { 00:16:29.314 "subsystem": "nbd", 00:16:29.314 "config": [] 00:16:29.314 }, 00:16:29.314 { 00:16:29.314 "subsystem": "nvmf", 00:16:29.314 "config": [ 00:16:29.314 { 00:16:29.314 "method": "nvmf_set_config", 00:16:29.314 "params": { 00:16:29.314 "discovery_filter": "match_any", 00:16:29.314 "admin_cmd_passthru": { 00:16:29.314 "identify_ctrlr": false 00:16:29.314 }, 00:16:29.314 "dhchap_digests": [ 00:16:29.314 "sha256", 00:16:29.315 "sha384", 00:16:29.315 "sha512" 00:16:29.315 ], 00:16:29.315 "dhchap_dhgroups": [ 00:16:29.315 "null", 00:16:29.315 "ffdhe2048", 00:16:29.315 "ffdhe3072", 00:16:29.315 "ffdhe4096", 00:16:29.315 "ffdhe6144", 00:16:29.315 "ffdhe8192" 00:16:29.315 ] 00:16:29.315 } 00:16:29.315 }, 00:16:29.315 { 00:16:29.315 "method": "nvmf_set_max_subsystems", 00:16:29.315 "params": { 00:16:29.315 "max_subsystems": 1024 00:16:29.315 } 00:16:29.315 }, 00:16:29.315 { 00:16:29.315 "method": "nvmf_set_crdt", 00:16:29.315 "params": { 00:16:29.315 "crdt1": 0, 00:16:29.315 "crdt2": 0, 00:16:29.315 "crdt3": 0 00:16:29.315 } 00:16:29.315 } 00:16:29.315 ] 00:16:29.315 }, 00:16:29.315 { 00:16:29.315 "subsystem": "iscsi", 00:16:29.315 "config": [ 00:16:29.315 { 00:16:29.315 "method": "iscsi_set_options", 00:16:29.315 "params": { 00:16:29.315 "node_base": "iqn.2016-06.io.spdk", 00:16:29.315 "max_sessions": 128, 00:16:29.315 "max_connections_per_session": 2, 00:16:29.315 "max_queue_depth": 64, 00:16:29.315 "default_time2wait": 2, 00:16:29.315 "default_time2retain": 20, 00:16:29.315 "first_burst_length": 8192, 00:16:29.315 "immediate_data": true, 00:16:29.315 "allow_duplicated_isid": false, 00:16:29.315 "error_recovery_level": 0, 00:16:29.315 "nop_timeout": 60, 00:16:29.315 "nop_in_interval": 30, 00:16:29.315 "disable_chap": false, 00:16:29.315 "require_chap": false, 00:16:29.315 "mutual_chap": false, 00:16:29.315 "chap_group": 0, 00:16:29.315 "max_large_datain_per_connection": 64, 00:16:29.315 "max_r2t_per_connection": 4, 00:16:29.315 "pdu_pool_size": 36864, 00:16:29.315 "immediate_data_pool_size": 16384, 00:16:29.315 "data_out_pool_size": 2048 00:16:29.315 } 00:16:29.315 } 00:16:29.315 ] 00:16:29.315 } 00:16:29.315 ] 00:16:29.315 }' 00:16:29.315 [2024-12-07 17:33:02.551937] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:16:29.315 [2024-12-07 17:33:02.552103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73560 ] 00:16:29.576 [2024-12-07 17:33:02.715915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.576 [2024-12-07 17:33:02.841522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.521 [2024-12-07 17:33:03.726004] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:30.521 [2024-12-07 17:33:03.726942] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:30.521 [2024-12-07 17:33:03.734168] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:30.521 [2024-12-07 17:33:03.734264] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:30.521 [2024-12-07 17:33:03.734276] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:30.521 [2024-12-07 17:33:03.734285] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:30.521 [2024-12-07 17:33:03.743103] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:30.521 [2024-12-07 17:33:03.743134] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:30.521 [2024-12-07 17:33:03.750024] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:30.521 [2024-12-07 17:33:03.750141] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:30.521 [2024-12-07 17:33:03.767012] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:30.521 17:33:03 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:30.521 17:33:03 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:30.521 17:33:03 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:30.521 17:33:03 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:30.521 17:33:03 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.521 17:33:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:30.521 17:33:03 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.521 17:33:03 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:30.521 17:33:03 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:30.521 17:33:03 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73560 00:16:30.521 17:33:03 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73560 ']' 00:16:30.521 17:33:03 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73560 00:16:30.521 17:33:03 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:30.521 17:33:03 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:30.521 17:33:03 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73560 00:16:30.521 killing process with pid 73560 00:16:30.521 17:33:03 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:30.521 
17:33:03 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:30.521 17:33:03 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73560' 00:16:30.521 17:33:03 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73560 00:16:30.521 17:33:03 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73560 00:16:31.915 [2024-12-07 17:33:05.241966] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:31.915 [2024-12-07 17:33:05.289057] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:31.915 [2024-12-07 17:33:05.289165] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:32.175 [2024-12-07 17:33:05.297006] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:32.175 [2024-12-07 17:33:05.297048] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:32.175 [2024-12-07 17:33:05.297054] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:32.175 [2024-12-07 17:33:05.297075] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:32.175 [2024-12-07 17:33:05.297188] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:33.111 17:33:06 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:33.111 ************************************ 00:16:33.111 END TEST test_save_ublk_config 00:16:33.111 ************************************ 00:16:33.111 00:16:33.111 real 0m8.422s 00:16:33.111 user 0m5.696s 00:16:33.111 sys 0m3.374s 00:16:33.111 17:33:06 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:33.111 17:33:06 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:33.370 17:33:06 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73639 00:16:33.370 17:33:06 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:33.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.370 17:33:06 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73639 00:16:33.370 17:33:06 ublk -- common/autotest_common.sh@835 -- # '[' -z 73639 ']' 00:16:33.370 17:33:06 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.370 17:33:06 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:33.370 17:33:06 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:33.370 17:33:06 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.370 17:33:06 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:33.370 17:33:06 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:33.370 [2024-12-07 17:33:06.584781] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:16:33.370 [2024-12-07 17:33:06.585006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73639 ]
00:16:33.370 [2024-12-07 17:33:06.732572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:16:33.629 [2024-12-07 17:33:06.811430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:33.629 [2024-12-07 17:33:06.811518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:34.195 17:33:07 ublk -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:34.195 17:33:07 ublk -- common/autotest_common.sh@868 -- # return 0
00:16:34.195 17:33:07 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk
00:16:34.195 17:33:07 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:34.195 17:33:07 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:34.195 17:33:07 ublk -- common/autotest_common.sh@10 -- # set +x
00:16:34.196 ************************************
00:16:34.196 START TEST test_create_ublk
00:16:34.196 ************************************
00:16:34.196 17:33:07 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk
00:16:34.196 17:33:07 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target
00:16:34.196 17:33:07 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:34.196 17:33:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:34.196 [2024-12-07 17:33:07.445998] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:16:34.196 [2024-12-07 17:33:07.447641] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:16:34.196 17:33:07 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:34.196 17:33:07 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target=
00:16:34.196 17:33:07 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096
00:16:34.196 17:33:07 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:34.196 17:33:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:34.453 17:33:07 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:34.453 17:33:07 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0
00:16:34.453 17:33:07 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512
00:16:34.453 17:33:07 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:34.453 17:33:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:34.453 [2024-12-07 17:33:07.619106] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512
00:16:34.453 [2024-12-07 17:33:07.619410] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0
00:16:34.453 [2024-12-07 17:33:07.619424] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:16:34.453 [2024-12-07 17:33:07.619430] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:16:34.453 [2024-12-07 17:33:07.627017] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:16:34.453 [2024-12-07 17:33:07.627036] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:16:34.453 [2024-12-07 17:33:07.635014] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:16:34.453 [2024-12-07 17:33:07.635503] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:16:34.453 [2024-12-07 17:33:07.657014] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:16:34.453 17:33:07 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:34.453 17:33:07 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0
00:16:34.453 17:33:07 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0
00:16:34.453 17:33:07 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0
00:16:34.453 17:33:07 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:34.453 17:33:07 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:34.454 17:33:07 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:34.454 17:33:07 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[
00:16:34.454 {
00:16:34.454 "ublk_device": "/dev/ublkb0",
00:16:34.454 "id": 0,
00:16:34.454 "queue_depth": 512,
00:16:34.454 "num_queues": 4,
00:16:34.454 "bdev_name": "Malloc0"
00:16:34.454 }
00:16:34.454 ]'
00:16:34.454 17:33:07 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device'
00:16:34.454 17:33:07 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]]
00:16:34.454 17:33:07 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id'
00:16:34.454 17:33:07 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]]
00:16:34.454 17:33:07 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth'
00:16:34.454 17:33:07 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]]
00:16:34.454 17:33:07 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues'
00:16:34.454 17:33:07 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]]
00:16:34.454 17:33:07 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name'
00:16:34.711 17:33:07 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]]
00:16:34.711 17:33:07 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10'
00:16:34.711 17:33:07 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0
00:16:34.711 17:33:07 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0
00:16:34.711 17:33:07 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728
00:16:34.711 17:33:07 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write
00:16:34.711 17:33:07 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc
00:16:34.711 17:33:07 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10'
00:16:34.711 17:33:07 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template=
00:16:34.711 17:33:07 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]]
00:16:34.711 17:33:07 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0'
00:16:34.711 17:33:07 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0'
00:16:34.711 17:33:07 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
00:16:34.711 fio: verification read phase will never start because write phase uses all of runtime
00:16:34.711 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
00:16:34.711 fio-3.35
00:16:34.711 Starting 1 process
00:16:46.907
00:16:46.907 fio_test: (groupid=0, jobs=1): err= 0: pid=73678: Sat Dec 7 17:33:18 2024
00:16:46.907 write: IOPS=14.7k, BW=57.5MiB/s (60.3MB/s)(575MiB/10001msec); 0 zone resets
00:16:46.907 clat (usec): min=33, max=10757, avg=67.19, stdev=122.11
00:16:46.907 lat (usec): min=34, max=10774, avg=67.61, stdev=122.13
00:16:46.907 clat percentiles (usec):
00:16:46.907 | 1.00th=[ 53], 5.00th=[ 56], 10.00th=[ 57], 20.00th=[ 58],
00:16:46.907 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 63],
00:16:46.907 | 70.00th=[ 64], 80.00th=[ 66], 90.00th=[ 70], 95.00th=[ 73],
00:16:46.907 | 99.00th=[ 85], 99.50th=[ 93], 99.90th=[ 2737], 99.95th=[ 3425],
00:16:46.907 | 99.99th=[ 3785]
00:16:46.907 bw ( KiB/s): min=26880, max=61888, per=100.00%, avg=58943.58, stdev=7919.04, samples=19
00:16:46.907 iops : min= 6720, max=15472, avg=14735.89, stdev=1979.76, samples=19
00:16:46.907 lat (usec) : 50=0.07%, 100=99.52%, 250=0.19%, 500=0.01%, 750=0.01%
00:16:46.907 lat (usec) : 1000=0.01%
00:16:46.907 lat (msec) : 2=0.06%, 4=0.12%, 10=0.01%, 20=0.01%
00:16:46.907 cpu : usr=2.46%, sys=11.50%, ctx=147218, majf=0, minf=795
00:16:46.907 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:46.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:46.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:46.907 issued rwts: total=0,147215,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:46.907 latency : target=0, window=0, percentile=100.00%, depth=1
00:16:46.907
00:16:46.907 Run status group 0 (all jobs):
00:16:46.907 WRITE: bw=57.5MiB/s (60.3MB/s), 57.5MiB/s-57.5MiB/s (60.3MB/s-60.3MB/s), io=575MiB (603MB), run=10001-10001msec
00:16:46.907
00:16:46.907 Disk stats (read/write):
00:16:46.907 ublkb0: ios=0/145751, merge=0/0, ticks=0/8583, in_queue=8583, util=99.09%
00:16:46.907 17:33:18 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:46.907 [2024-12-07 17:33:18.074853] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:16:46.907 [2024-12-07 17:33:18.119037] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:16:46.907 [2024-12-07 17:33:18.119729] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:16:46.907 [2024-12-07 17:33:18.127010] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:16:46.907 [2024-12-07 17:33:18.127245] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:16:46.907 [2024-12-07 17:33:18.127261] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.907 17:33:18 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:46.907 [2024-12-07 17:33:18.142060] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0
00:16:46.907 request:
00:16:46.907 {
00:16:46.907 "ublk_id": 0,
00:16:46.907 "method": "ublk_stop_disk",
00:16:46.907 "req_id": 1
00:16:46.907 }
00:16:46.907 Got JSON-RPC error response
00:16:46.907 response:
00:16:46.907 {
00:16:46.907 "code": -19,
00:16:46.907 "message": "No such device"
00:16:46.907 }
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:46.907 17:33:18 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:46.907 [2024-12-07 17:33:18.158061] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:16:46.907 [2024-12-07 17:33:18.165997] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:16:46.907 [2024-12-07 17:33:18.166030] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.907 17:33:18 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.907 17:33:18 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices
00:16:46.907 17:33:18 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.907 17:33:18 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]'
00:16:46.907 17:33:18 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length
00:16:46.907 17:33:18 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']'
00:16:46.907 17:33:18 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.907 17:33:18 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]'
00:16:46.907 17:33:18 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length
00:16:46.907 ************************************
00:16:46.907 END TEST test_create_ublk
00:16:46.907 ************************************
00:16:46.907 17:33:18 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']'
00:16:46.907
00:16:46.907 real 0m11.182s
00:16:46.907 user 0m0.541s
00:16:46.907 sys 0m1.229s
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:46.907 17:33:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:46.907 17:33:18 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk
00:16:46.907 17:33:18 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:46.907 17:33:18 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:46.907 17:33:18 ublk -- common/autotest_common.sh@10 -- # set +x
00:16:46.907 ************************************
00:16:46.907 START TEST test_create_multi_ublk
00:16:46.907 ************************************
00:16:46.907 17:33:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk
00:16:46.907 17:33:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target
00:16:46.907 17:33:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.907 17:33:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:46.907 [2024-12-07 17:33:18.665997] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:16:46.907 [2024-12-07 17:33:18.667540] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:16:46.907 17:33:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.907 17:33:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target=
00:16:46.907 17:33:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3
00:16:46.907 17:33:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:46.907 17:33:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096
00:16:46.907 17:33:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.907 17:33:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:46.907 17:33:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.907 17:33:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0
00:16:46.907 17:33:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512
00:16:46.907 17:33:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.907 17:33:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:46.907 [2024-12-07 17:33:18.882107] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512
00:16:46.907 [2024-12-07 17:33:18.882406] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0
00:16:46.907 [2024-12-07 17:33:18.882418] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:16:46.907 [2024-12-07 17:33:18.882426] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:16:46.907 [2024-12-07 17:33:18.906008] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:16:46.907 [2024-12-07 17:33:18.906029] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:16:46.907 [2024-12-07 17:33:18.918004] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:16:46.907 [2024-12-07 17:33:18.918499] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:16:46.907 [2024-12-07 17:33:18.954002] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:16:46.907 17:33:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.907 17:33:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0
00:16:46.907 17:33:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:46.907 17:33:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096
00:16:46.907 17:33:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.907 17:33:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:46.908 [2024-12-07 17:33:19.170103] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512
00:16:46.908 [2024-12-07 17:33:19.170402] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1
00:16:46.908 [2024-12-07 17:33:19.170416] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:16:46.908 [2024-12-07 17:33:19.170421] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV
00:16:46.908 [2024-12-07 17:33:19.178023] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed
00:16:46.908 [2024-12-07 17:33:19.178040] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS
00:16:46.908 [2024-12-07 17:33:19.186015] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:16:46.908 [2024-12-07 17:33:19.186498] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV
00:16:46.908 [2024-12-07 17:33:19.203008] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:46.908 [2024-12-07 17:33:19.362084] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512
00:16:46.908 [2024-12-07 17:33:19.362386] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2
00:16:46.908 [2024-12-07 17:33:19.362398] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq
00:16:46.908 [2024-12-07 17:33:19.362404] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV
00:16:46.908 [2024-12-07 17:33:19.370022] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed
00:16:46.908 [2024-12-07 17:33:19.370043] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS
00:16:46.908 [2024-12-07 17:33:19.378006] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:16:46.908 [2024-12-07 17:33:19.378501] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV
00:16:46.908 [2024-12-07 17:33:19.387022] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:46.908 [2024-12-07 17:33:19.546102] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512
00:16:46.908 [2024-12-07 17:33:19.546398] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3
00:16:46.908 [2024-12-07 17:33:19.546411] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq
00:16:46.908 [2024-12-07 17:33:19.546416] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV
00:16:46.908 [2024-12-07 17:33:19.554021] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed
00:16:46.908 [2024-12-07 17:33:19.554038] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS
00:16:46.908 [2024-12-07 17:33:19.562004] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:16:46.908 [2024-12-07 17:33:19.562497] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV
00:16:46.908 [2024-12-07 17:33:19.571030] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[
00:16:46.908 {
00:16:46.908 "ublk_device": "/dev/ublkb0",
00:16:46.908 "id": 0,
00:16:46.908 "queue_depth": 512,
00:16:46.908 "num_queues": 4,
00:16:46.908 "bdev_name": "Malloc0"
00:16:46.908 },
00:16:46.908 {
00:16:46.908 "ublk_device": "/dev/ublkb1",
00:16:46.908 "id": 1,
00:16:46.908 "queue_depth": 512,
00:16:46.908 "num_queues": 4,
00:16:46.908 "bdev_name": "Malloc1"
00:16:46.908 },
00:16:46.908 {
00:16:46.908 "ublk_device": "/dev/ublkb2",
00:16:46.908 "id": 2,
00:16:46.908 "queue_depth": 512,
00:16:46.908 "num_queues": 4,
00:16:46.908 "bdev_name": "Malloc2"
00:16:46.908 },
00:16:46.908 {
00:16:46.908 "ublk_device": "/dev/ublkb3",
00:16:46.908 "id": 3,
00:16:46.908 "queue_depth": 512,
00:16:46.908 "num_queues": 4,
00:16:46.908 "bdev_name": "Malloc3"
00:16:46.908 }
00:16:46.908 ]'
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device'
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]]
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id'
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]]
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth'
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues'
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name'
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]]
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device'
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]]
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id'
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]]
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth'
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues'
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name'
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]]
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device'
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]]
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id'
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]]
00:16:46.908 17:33:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth'
00:16:46.909 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:16:46.909 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues'
00:16:46.909 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:16:46.909 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name'
00:16:46.909 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]]
00:16:46.909 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:46.909 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device'
00:16:46.909 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]]
00:16:46.909 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id'
00:16:46.909 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]]
00:16:46.909 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth'
00:16:46.909 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:16:46.909 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues'
00:16:46.909 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:16:46.909 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name'
00:16:46.909 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]]
00:16:46.909 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]]
00:16:46.909 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3
00:16:46.909 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:46.909 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0
00:16:46.909 17:33:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.909 17:33:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:46.909 [2024-12-07 17:33:20.242075] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:16:47.166 [2024-12-07 17:33:20.289060] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:16:47.166 [2024-12-07 17:33:20.289957] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:16:47.166 [2024-12-07 17:33:20.298048] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:16:47.166 [2024-12-07 17:33:20.298299] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:16:47.166 [2024-12-07 17:33:20.298315] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:16:47.166 17:33:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.166 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:47.166 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1
00:16:47.166 17:33:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.166 17:33:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:47.166 [2024-12-07 17:33:20.313079] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:16:47.166 [2024-12-07 17:33:20.350042] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:16:47.166 [2024-12-07 17:33:20.350853] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:16:47.166 [2024-12-07 17:33:20.362033] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:16:47.166 [2024-12-07 17:33:20.362280] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:16:47.166 [2024-12-07 17:33:20.362287] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:16:47.166 17:33:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.166 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:47.167 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2
00:16:47.167 17:33:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.167 17:33:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:47.167 [2024-12-07 17:33:20.377081] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV
00:16:47.167 [2024-12-07 17:33:20.425040] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed
00:16:47.167 [2024-12-07 17:33:20.425800] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV
00:16:47.167 [2024-12-07 17:33:20.437010] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed
00:16:47.167 [2024-12-07 17:33:20.437235] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq
00:16:47.167 [2024-12-07 17:33:20.437245] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped
00:16:47.167 17:33:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.167 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:47.167 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3
00:16:47.167 17:33:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.167 17:33:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:47.167 [2024-12-07 17:33:20.445064] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV
00:16:47.167 [2024-12-07 17:33:20.492031] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed
00:16:47.167 [2024-12-07 17:33:20.492691] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV
00:16:47.167 [2024-12-07 17:33:20.508010] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed
00:16:47.167 [2024-12-07 17:33:20.508245] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq
00:16:47.167 [2024-12-07 17:33:20.508258] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped
00:16:47.167 17:33:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.424 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target
00:16:47.424 [2024-12-07 17:33:20.700060] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:16:47.424 [2024-12-07 17:33:20.707998] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:16:47.424 [2024-12-07 17:33:20.708028] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:16:47.424 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3
00:16:47.424 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:47.424 17:33:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0
00:16:47.424 17:33:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.424 17:33:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:47.990 17:33:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.990 17:33:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:47.990 17:33:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1
00:16:47.990 17:33:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.990 17:33:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:48.248 17:33:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:48.248 17:33:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:48.248 17:33:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2
00:16:48.248 17:33:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:48.248 17:33:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:48.507 17:33:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:48.507 17:33:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:16:48.507 17:33:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3
00:16:48.507 17:33:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:48.507 17:33:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:48.507 17:33:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:48.507 17:33:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices
00:16:48.507 17:33:21 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs
00:16:48.507 17:33:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:48.507 17:33:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:48.507 17:33:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:48.507 17:33:21 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]'
00:16:48.507 17:33:21 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length
00:16:48.507 17:33:21 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']'
00:16:48.507 17:33:21 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores
00:16:48.507 17:33:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:48.507 17:33:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:48.766 17:33:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:48.766 17:33:21 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]'
00:16:48.766 17:33:21 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length
00:16:48.766 ************************************
00:16:48.766 END TEST test_create_multi_ublk
00:16:48.766 ************************************
00:16:48.766 17:33:21 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']'
00:16:48.766
00:16:48.766 real 0m3.251s
00:16:48.766 user 0m0.820s
00:16:48.766 sys 0m0.139s
00:16:48.766 17:33:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:48.766 17:33:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:16:48.766 17:33:21 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:16:48.766 17:33:21 ublk -- ublk/ublk.sh@147 -- # cleanup
00:16:48.766 17:33:21 ublk -- ublk/ublk.sh@130 -- # killprocess 73639
00:16:48.766 17:33:21 ublk -- common/autotest_common.sh@954 -- # '[' -z 73639 ']'
00:16:48.766 17:33:21 ublk -- common/autotest_common.sh@958 -- # kill -0 73639
00:16:48.766 17:33:21 ublk -- common/autotest_common.sh@959 -- # uname
00:16:48.766 17:33:21 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:48.766 17:33:21 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73639
00:16:48.766 killing process with pid 73639
00:16:48.766 17:33:21 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:48.766 17:33:21 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:48.766 17:33:21 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73639'
00:16:48.766 17:33:21 ublk -- common/autotest_common.sh@973 -- # kill 73639
00:16:48.766 17:33:21 ublk -- common/autotest_common.sh@978 -- # wait 73639
00:16:49.332 [2024-12-07 17:33:22.493169] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:16:49.332 [2024-12-07 17:33:22.493219] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:16:49.899
00:16:49.899 real 0m25.282s
00:16:49.899 user 0m35.816s
00:16:49.899 sys 0m9.188s
00:16:49.899 ************************************
00:16:49.899 END TEST ublk
00:16:49.899 ************************************
00:16:49.899 17:33:23 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:49.899 17:33:23 ublk -- common/autotest_common.sh@10 -- # set +x
00:16:49.899 17:33:23 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh
00:16:49.899 17:33:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:49.899 17:33:23 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:49.899 17:33:23 -- common/autotest_common.sh@10 -- # set +x
00:16:49.899 ************************************
00:16:49.899 START TEST ublk_recovery
00:16:49.899 ************************************
00:16:49.899 17:33:23 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh
00:16:49.899 * Looking for test storage...
00:16:49.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk
00:16:50.160 17:33:23 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:16:50.160 17:33:23 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version
00:16:50.160 17:33:23 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:16:50.160 17:33:23 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-:
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-:
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<'
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@345 -- # : 1
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@365 -- # decimal 1
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@353 -- # local d=1
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@355 -- # echo 1
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@366 -- # decimal 2
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@353 -- # local d=2
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@355 -- # echo 2
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:50.160 17:33:23 ublk_recovery -- scripts/common.sh@368 -- # return 0
00:16:50.160 17:33:23 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:50.160 17:33:23 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:16:50.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:50.160 --rc genhtml_branch_coverage=1
00:16:50.160 --rc genhtml_function_coverage=1
00:16:50.160 --rc genhtml_legend=1
00:16:50.160 --rc geninfo_all_blocks=1
00:16:50.160 --rc geninfo_unexecuted_blocks=1
00:16:50.160
00:16:50.160 '
00:16:50.160 17:33:23 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:16:50.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:50.160 --rc genhtml_branch_coverage=1
00:16:50.160 --rc genhtml_function_coverage=1
00:16:50.160 --rc genhtml_legend=1
00:16:50.160 --rc geninfo_all_blocks=1
00:16:50.160 --rc geninfo_unexecuted_blocks=1
00:16:50.160
00:16:50.160 '
00:16:50.160 17:33:23 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:16:50.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:50.160 --rc genhtml_branch_coverage=1
00:16:50.160 --rc genhtml_function_coverage=1
00:16:50.160 --rc genhtml_legend=1
00:16:50.160 --rc geninfo_all_blocks=1
00:16:50.160 --rc geninfo_unexecuted_blocks=1
00:16:50.160
00:16:50.160 '
00:16:50.160 17:33:23 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:16:50.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:50.160 --rc genhtml_branch_coverage=1
00:16:50.160 --rc genhtml_function_coverage=1
00:16:50.160 --rc genhtml_legend=1
00:16:50.160 --rc geninfo_all_blocks=1
00:16:50.160 --rc geninfo_unexecuted_blocks=1
00:16:50.160
00:16:50.160 '
00:16:50.160 17:33:23 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh
00:16:50.160 17:33:23 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128
00:16:50.160 17:33:23 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512
00:16:50.160 17:33:23 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400
00:16:50.160 17:33:23 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096
00:16:50.160 17:33:23 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4
00:16:50.160 17:33:23 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304
00:16:50.160 17:33:23 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124
00:16:50.160 17:33:23 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424
00:16:50.160 17:33:23 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv
00:16:50.160 17:33:23 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74027
00:16:50.160 17:33:23 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:16:50.160 17:33:23 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:16:50.160 17:33:23 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74027
00:16:50.160 17:33:23 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74027 ']'
00:16:50.160 17:33:23 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:50.160 17:33:23 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:50.160 17:33:23 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:50.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:50.160 17:33:23 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:50.160 17:33:23 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:16:50.160 [2024-12-07 17:33:23.442721] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:16:50.160 [2024-12-07 17:33:23.443003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74027 ]
00:16:50.420 [2024-12-07 17:33:23.599448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:16:50.420 [2024-12-07 17:33:23.725521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:50.420 [2024-12-07 17:33:23.725628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:50.992 17:33:24 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:50.992 17:33:24 ublk_recovery -- common/autotest_common.sh@868 -- # return 0
00:16:50.992 17:33:24 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target
00:16:50.992 17:33:24 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.992 17:33:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:16:50.992 [2024-12-07 17:33:24.342003] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:16:50.992 [2024-12-07 17:33:24.343898] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:16:50.992 17:33:24 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.992 17:33:24 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096
00:16:50.992 17:33:24 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.992 17:33:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:16:51.253 malloc0
00:16:51.253 17:33:24 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:51.253 17:33:24 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128
00:16:51.253 17:33:24 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.253 17:33:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:16:51.253 [2024-12-07 17:33:24.454133] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128
00:16:51.253 [2024-12-07 17:33:24.454227] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1
00:16:51.253 [2024-12-07 17:33:24.454237] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:16:51.253 [2024-12-07 17:33:24.454243] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV
00:16:51.253 [2024-12-07 17:33:24.463107] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed
00:16:51.253 [2024-12-07 17:33:24.463129] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS
00:16:51.253 [2024-12-07 17:33:24.470018] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:16:51.253 [2024-12-07 17:33:24.470176] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV
00:16:51.253 [2024-12-07 17:33:24.486014] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed
00:16:51.253 1
00:16:51.253 17:33:24 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:51.253 17:33:24 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1
00:16:52.206 17:33:25 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74062
00:16:52.206 17:33:25 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60
00:16:52.206 17:33:25 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5
00:16:52.465 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:16:52.465 fio-3.35
00:16:52.465 Starting 1 process
00:16:57.791 17:33:30 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74027
00:16:57.791 17:33:30 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5
00:17:03.083 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74027 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk
00:17:03.083 17:33:35 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74172
00:17:03.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:03.083 17:33:35 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:17:03.083 17:33:35 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74172
00:17:03.083 17:33:35 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:17:03.083 17:33:35 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74172 ']'
00:17:03.083 17:33:35 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:03.083 17:33:35 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:03.083 17:33:35 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:03.083 17:33:35 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:03.083 17:33:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:03.083 [2024-12-07 17:33:35.584195] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:17:03.083 [2024-12-07 17:33:35.584315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74172 ]
00:17:03.083 [2024-12-07 17:33:35.740881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:17:03.083 [2024-12-07 17:33:35.835748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:03.083 [2024-12-07 17:33:35.835845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:03.083 17:33:36 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:03.083 17:33:36 ublk_recovery -- common/autotest_common.sh@868 -- # return 0
00:17:03.083 17:33:36 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target
00:17:03.083 17:33:36 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:03.083 17:33:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:03.083 [2024-12-07 17:33:36.411997] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:17:03.083 [2024-12-07 17:33:36.413798] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:17:03.084 17:33:36 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:03.084 17:33:36 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096
00:17:03.084 17:33:36 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:03.084 17:33:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:03.343 malloc0
00:17:03.343 17:33:36 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:03.343 17:33:36 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1
00:17:03.343 17:33:36 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:03.343 17:33:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:03.343 [2024-12-07 17:33:36.508118] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0
00:17:03.343 [2024-12-07 17:33:36.508146] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:17:03.343 [2024-12-07 17:33:36.508155] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:17:03.343 1
00:17:03.343 17:33:36 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:03.343 17:33:36 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74062
00:17:03.343 [2024-12-07 17:33:36.517001] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:17:03.343 [2024-12-07 17:33:36.517023] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:17:04.284 [2024-12-07 17:33:37.517055] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:17:04.284 [2024-12-07 17:33:37.524011] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:17:04.284 [2024-12-07 17:33:37.524028] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:17:05.220 [2024-12-07 17:33:38.526013] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:17:05.220 [2024-12-07 17:33:38.533001] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:17:05.220 [2024-12-07 17:33:38.533019] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:17:06.155 [2024-12-07 17:33:39.533040] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:17:06.414 [2024-12-07 17:33:39.542009] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:17:06.414 [2024-12-07 17:33:39.542025] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:17:06.414 [2024-12-07 17:33:39.542033] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda
00:17:06.414 [2024-12-07 17:33:39.542107] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY
00:17:28.335 [2024-12-07 17:34:00.854010] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed
00:17:28.335 [2024-12-07 17:34:00.860601] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY
00:17:28.335 [2024-12-07 17:34:00.868212] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed
00:17:28.335 [2024-12-07 17:34:00.868232] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
00:17:55.012
00:17:55.012 fio_test: (groupid=0, jobs=1): err= 0: pid=74065: Sat Dec 7 17:34:25 2024
00:17:55.012 read: IOPS=13.4k, BW=52.3MiB/s (54.8MB/s)(3137MiB/60001msec)
00:17:55.012 slat (nsec): min=1233, max=1581.4k, avg=5579.76, stdev=2281.66
00:17:55.012 clat (usec): min=684, max=30377k, avg=4619.66, stdev=264716.64
00:17:55.012 lat (usec): min=690, max=30377k, avg=4625.24, stdev=264716.64
00:17:55.012 clat percentiles (usec):
00:17:55.012 | 1.00th=[ 1926], 5.00th=[ 2073], 10.00th=[ 2114], 20.00th=[ 2114],
00:17:55.012 | 30.00th=[ 2147], 40.00th=[ 2180], 50.00th=[ 2180], 60.00th=[ 2180],
00:17:55.012 | 70.00th=[ 2212], 80.00th=[ 2245], 90.00th=[ 2311], 95.00th=[ 3261],
00:17:55.012 | 99.00th=[ 5407], 99.50th=[ 5866], 99.90th=[ 8291], 99.95th=[10159],
00:17:55.012 | 99.99th=[13042]
00:17:55.012 bw ( KiB/s): min=36072, max=112192, per=100.00%, avg=107126.24, stdev=13462.36, samples=59
00:17:55.012 iops : min= 9018, max=28048, avg=26781.56, stdev=3365.59, samples=59
00:17:55.012 write: IOPS=13.4k, BW=52.2MiB/s (54.7MB/s)(3132MiB/60001msec); 0 zone resets
00:17:55.012 slat (nsec): min=1529, max=154963, avg=5820.35, stdev=1473.85
00:17:55.012 clat (usec): min=689, max=30377k, avg=4940.04, stdev=277656.15
00:17:55.012 lat (usec): min=694, max=30377k, avg=4945.86, stdev=277656.15
00:17:55.012 clat percentiles (usec):
00:17:55.012 | 1.00th=[ 1975], 5.00th=[ 2180], 10.00th=[ 2212], 20.00th=[ 2245],
00:17:55.012 | 30.00th=[ 2245], 40.00th=[ 2278], 50.00th=[ 2278], 60.00th=[ 2311],
00:17:55.012 | 70.00th=[ 2311], 80.00th=[ 2343], 90.00th=[ 2409], 95.00th=[ 3228],
00:17:55.012 | 99.00th=[ 5473], 99.50th=[ 5997], 99.90th=[ 8455], 99.95th=[10159],
00:17:55.012 | 99.99th=[13173]
00:17:55.012 bw ( KiB/s): min=36216, max=112408, per=100.00%, avg=106959.46, stdev=13259.95, samples=59
00:17:55.012 iops : min= 9054, max=28102, avg=26739.86, stdev=3314.99, samples=59
00:17:55.012 lat (usec) : 750=0.01%, 1000=0.01%
00:17:55.012 lat (msec) : 2=1.48%, 4=95.54%, 10=2.93%, 20=0.04%, >=2000=0.01%
00:17:55.012 cpu : usr=2.85%, sys=15.61%, ctx=52566, majf=0, minf=13
00:17:55.012 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:17:55.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:55.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:55.012 issued rwts: total=803021,801716,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:55.012 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:55.012 00:17:55.012 Run status group 0 (all jobs): 00:17:55.012 READ: bw=52.3MiB/s (54.8MB/s), 52.3MiB/s-52.3MiB/s (54.8MB/s-54.8MB/s), io=3137MiB (3289MB), run=60001-60001msec 00:17:55.012 WRITE: bw=52.2MiB/s (54.7MB/s), 52.2MiB/s-52.2MiB/s (54.7MB/s-54.7MB/s), io=3132MiB (3284MB), run=60001-60001msec 00:17:55.012 00:17:55.012 Disk stats (read/write): 00:17:55.012 ublkb1: ios=800024/798777, merge=0/0, ticks=3658407/3839777, in_queue=7498185, util=99.88% 00:17:55.012 17:34:25 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:55.012 17:34:25 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.012 17:34:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:55.012 [2024-12-07 17:34:25.746644] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:55.012 [2024-12-07 17:34:25.782126] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:55.012 [2024-12-07 17:34:25.782284] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:55.012 [2024-12-07 17:34:25.788008] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:55.012 [2024-12-07 17:34:25.788100] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:55.012 [2024-12-07 17:34:25.788107] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:55.012 17:34:25 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.012 17:34:25 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:55.012 17:34:25 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.012 17:34:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:55.012 [2024-12-07 17:34:25.803112] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:55.012 [2024-12-07 17:34:25.812000] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:55.012 [2024-12-07 17:34:25.812032] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:55.012 17:34:25 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.012 17:34:25 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:55.012 17:34:25 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:55.012 17:34:25 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74172 00:17:55.012 17:34:25 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74172 ']' 00:17:55.012 17:34:25 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74172 00:17:55.012 17:34:25 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:55.012 17:34:25 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:55.012 17:34:25 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74172 00:17:55.012 killing process with pid 74172 00:17:55.012 17:34:25 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:55.012 17:34:25 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:55.012 17:34:25 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74172' 00:17:55.012 17:34:25 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74172 00:17:55.012 17:34:25 ublk_recovery -- common/autotest_common.sh@978 -- # 
wait 74172 00:17:55.012 [2024-12-07 17:34:26.898955] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:55.012 [2024-12-07 17:34:26.899012] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:55.012 00:17:55.012 real 1m4.449s 00:17:55.012 user 1m45.861s 00:17:55.012 sys 0m23.277s 00:17:55.012 17:34:27 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:55.012 17:34:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:55.012 ************************************ 00:17:55.012 END TEST ublk_recovery 00:17:55.012 ************************************ 00:17:55.012 17:34:27 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:55.012 17:34:27 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:55.012 17:34:27 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:55.012 17:34:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:55.012 17:34:27 -- common/autotest_common.sh@10 -- # set +x 00:17:55.012 17:34:27 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:55.012 17:34:27 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:55.012 17:34:27 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:55.012 17:34:27 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:55.012 17:34:27 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:55.012 17:34:27 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:55.012 17:34:27 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:55.012 17:34:27 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:55.012 17:34:27 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:55.012 17:34:27 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:55.012 17:34:27 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:55.012 17:34:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:55.012 17:34:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:55.012 17:34:27 -- common/autotest_common.sh@10 -- # set +x 00:17:55.012 ************************************ 00:17:55.012 START TEST ftl 00:17:55.012 ************************************ 00:17:55.012 17:34:27 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:55.012 * Looking for test storage... 
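Backing up to the fio summary above: the raw counts it prints are internally consistent with the reported bandwidth, which is an easy sanity check when reading these runs:

    # 803021 reads completed in the 60 s window, 4 KiB per op
    echo $((803021 / 60))                   # 13383 -> the reported 13.4k IOPS
    echo $((803021 * 4096 / 1048576))       # 3136  -> the reported 3137 MiB of io
    echo $((803021 * 4096 / 60 / 1048576))  # 52    -> the reported 52.3 MiB/s

The write side works out the same way (801716 ops over 60 s gives 52.2 MiB/s), and util=99.88% says the ublk device stayed saturated straight through the kill and recovery.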
00:17:55.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:55.012 17:34:27 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:55.012 17:34:27 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:17:55.012 17:34:27 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:55.012 17:34:27 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:55.012 17:34:27 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:55.012 17:34:27 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:55.012 17:34:27 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:55.012 17:34:27 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:55.012 17:34:27 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:55.012 17:34:27 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:55.012 17:34:27 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:55.012 17:34:27 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:55.012 17:34:27 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:55.012 17:34:27 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:55.012 17:34:27 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:55.012 17:34:27 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:55.012 17:34:27 ftl -- scripts/common.sh@345 -- # : 1 00:17:55.012 17:34:27 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:55.012 17:34:27 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:55.012 17:34:27 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:55.012 17:34:27 ftl -- scripts/common.sh@353 -- # local d=1 00:17:55.012 17:34:27 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:55.012 17:34:27 ftl -- scripts/common.sh@355 -- # echo 1 00:17:55.012 17:34:27 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:55.012 17:34:27 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:55.012 17:34:27 ftl -- scripts/common.sh@353 -- # local d=2 00:17:55.012 17:34:27 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:55.012 17:34:27 ftl -- scripts/common.sh@355 -- # echo 2 00:17:55.012 17:34:27 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:55.012 17:34:27 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:55.012 17:34:27 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:55.012 17:34:27 ftl -- scripts/common.sh@368 -- # return 0 00:17:55.012 17:34:27 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:55.012 17:34:27 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:55.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.012 --rc genhtml_branch_coverage=1 00:17:55.012 --rc genhtml_function_coverage=1 00:17:55.012 --rc genhtml_legend=1 00:17:55.012 --rc geninfo_all_blocks=1 00:17:55.012 --rc geninfo_unexecuted_blocks=1 00:17:55.012 00:17:55.012 ' 00:17:55.012 17:34:27 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:55.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.012 --rc genhtml_branch_coverage=1 00:17:55.012 --rc genhtml_function_coverage=1 00:17:55.012 --rc genhtml_legend=1 00:17:55.012 --rc geninfo_all_blocks=1 00:17:55.012 --rc geninfo_unexecuted_blocks=1 00:17:55.012 00:17:55.012 ' 00:17:55.012 17:34:27 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:55.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.012 --rc genhtml_branch_coverage=1 00:17:55.012 --rc genhtml_function_coverage=1 00:17:55.012 --rc 
genhtml_legend=1 00:17:55.012 --rc geninfo_all_blocks=1 00:17:55.012 --rc geninfo_unexecuted_blocks=1 00:17:55.012 00:17:55.012 ' 00:17:55.012 17:34:27 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:55.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.012 --rc genhtml_branch_coverage=1 00:17:55.012 --rc genhtml_function_coverage=1 00:17:55.012 --rc genhtml_legend=1 00:17:55.012 --rc geninfo_all_blocks=1 00:17:55.012 --rc geninfo_unexecuted_blocks=1 00:17:55.012 00:17:55.012 ' 00:17:55.012 17:34:27 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:55.012 17:34:27 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:55.012 17:34:27 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:55.012 17:34:27 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:55.012 17:34:27 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:55.012 17:34:27 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:55.012 17:34:27 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:55.012 17:34:27 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:55.012 17:34:27 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:55.012 17:34:27 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:55.012 17:34:27 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:55.012 17:34:27 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:55.012 17:34:27 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:55.012 17:34:27 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:55.012 17:34:27 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:55.012 17:34:27 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:55.012 17:34:27 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:55.012 17:34:27 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:55.012 17:34:27 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:55.012 17:34:27 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:55.012 17:34:27 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:55.012 17:34:27 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:55.012 17:34:27 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:55.012 17:34:27 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:55.012 17:34:27 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:55.012 17:34:27 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:55.012 17:34:27 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:55.012 17:34:27 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:55.012 17:34:27 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:55.012 17:34:27 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:55.012 17:34:27 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:55.013 17:34:27 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:17:55.013 17:34:27 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:55.013 17:34:27 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:55.013 17:34:27 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:55.013 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:55.013 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:55.013 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:55.013 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:55.013 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:55.013 17:34:28 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74977 00:17:55.013 17:34:28 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74977 00:17:55.013 17:34:28 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:55.013 17:34:28 ftl -- common/autotest_common.sh@835 -- # '[' -z 74977 ']' 00:17:55.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.013 17:34:28 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.013 17:34:28 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.013 17:34:28 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.013 17:34:28 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.013 17:34:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:55.271 [2024-12-07 17:34:28.476810] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:17:55.271 [2024-12-07 17:34:28.476968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74977 ] 00:17:55.271 [2024-12-07 17:34:28.636034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.532 [2024-12-07 17:34:28.724667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.105 17:34:29 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.105 17:34:29 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:56.105 17:34:29 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:56.366 17:34:29 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:56.940 17:34:30 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:56.940 17:34:30 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:57.513 17:34:30 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:57.513 17:34:30 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:57.513 17:34:30 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:57.513 17:34:30 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:57.513 17:34:30 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:57.513 17:34:30 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:57.513 17:34:30 ftl -- ftl/ftl.sh@50 -- # break 00:17:57.513 17:34:30 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:57.513 17:34:30 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:17:57.513 17:34:30 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:57.513 17:34:30 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:57.775 17:34:31 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:57.775 17:34:31 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:57.775 17:34:31 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:57.775 17:34:31 ftl -- ftl/ftl.sh@63 -- # break 00:17:57.775 17:34:31 ftl -- ftl/ftl.sh@66 -- # killprocess 74977 00:17:57.775 17:34:31 ftl -- common/autotest_common.sh@954 -- # '[' -z 74977 ']' 00:17:57.775 17:34:31 ftl -- common/autotest_common.sh@958 -- # kill -0 74977 00:17:57.775 17:34:31 ftl -- common/autotest_common.sh@959 -- # uname 00:17:57.775 17:34:31 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.775 17:34:31 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74977 00:17:57.775 17:34:31 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:57.775 17:34:31 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:57.775 killing process with pid 74977 00:17:57.775 17:34:31 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74977' 00:17:57.775 17:34:31 ftl -- common/autotest_common.sh@973 -- # kill 74977 00:17:57.775 17:34:31 ftl -- common/autotest_common.sh@978 -- # wait 74977 00:17:59.163 17:34:32 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:59.163 17:34:32 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:59.163 17:34:32 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:59.163 17:34:32 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:59.163 17:34:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:59.163 ************************************ 00:17:59.163 START TEST ftl_fio_basic 00:17:59.163 ************************************ 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:59.163 * Looking for test storage... 
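Before handing off to fio.sh, ftl.sh probed the attached NVMe namespaces and settled on 0000:00:10.0 as the NV cache and 0000:00:11.0 as the base device. The probe is just jq over bdev_get_bdevs, reconstructed here (rpc.py path abbreviated):

    # cache candidates: namespaces with 64-byte metadata, non-zoned, big enough
    scripts/rpc.py bdev_get_bdevs | jq -r '.[]
        | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
          .driver_specific.nvme[].pci_address'

    # base candidates: anything else large enough that is not the chosen cache
    scripts/rpc.py bdev_get_bdevs | jq -r '.[]
        | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0"
                 and .zoned == false and .num_blocks >= 1310720)
          .driver_specific.nvme[].pci_address'

1310720 blocks at the 4096-byte block size is the 5120 MiB minimum the suite needs.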
00:17:59.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:59.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.163 --rc genhtml_branch_coverage=1 00:17:59.163 --rc genhtml_function_coverage=1 00:17:59.163 --rc genhtml_legend=1 00:17:59.163 --rc geninfo_all_blocks=1 00:17:59.163 --rc geninfo_unexecuted_blocks=1 00:17:59.163 00:17:59.163 ' 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:59.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.163 --rc 
genhtml_branch_coverage=1 00:17:59.163 --rc genhtml_function_coverage=1 00:17:59.163 --rc genhtml_legend=1 00:17:59.163 --rc geninfo_all_blocks=1 00:17:59.163 --rc geninfo_unexecuted_blocks=1 00:17:59.163 00:17:59.163 ' 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:59.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.163 --rc genhtml_branch_coverage=1 00:17:59.163 --rc genhtml_function_coverage=1 00:17:59.163 --rc genhtml_legend=1 00:17:59.163 --rc geninfo_all_blocks=1 00:17:59.163 --rc geninfo_unexecuted_blocks=1 00:17:59.163 00:17:59.163 ' 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:59.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.163 --rc genhtml_branch_coverage=1 00:17:59.163 --rc genhtml_function_coverage=1 00:17:59.163 --rc genhtml_legend=1 00:17:59.163 --rc geninfo_all_blocks=1 00:17:59.163 --rc geninfo_unexecuted_blocks=1 00:17:59.163 00:17:59.163 ' 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:59.163 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:59.426 
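The cmp_versions walk traced just above decides which lcov --rc option names to use: 1.15 is compared field by field against 2 and comes out lower, so the pre-2.0 spellings (lcov_branch_coverage=1 and friends) are kept. A simplified reconstruction of that comparison (the real helper in scripts/common.sh also handles the other operators):

    version_lt() {   # version_lt 1.15 2  -> returns 0 (true): 1.15 < 2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v l1=${#ver1[@]} l2=${#ver2[@]}
        for ((v = 0; v < (l1 > l2 ? l1 : l2); v++)); do
            # missing fields count as 0, so 1.15 vs 2 compares as (1,15) vs (2,0)
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1   # equal is not less-than
    }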
17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75109 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75109 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75109 ']' 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
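waitforlisten is the polling loop behind the "Waiting for process to start up..." message: it spins until the freshly launched target (pid 75109 here) answers on /var/tmp/spdk.sock. A minimal sketch, assuming scripts/rpc.py is reachable and reusing the max_retries=100 and default rpc_addr visible in the trace:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 100; i != 0; i--)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target exited early
            # an RPC that always exists; success means the socket is live
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1
    }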
00:17:59.426 17:34:32 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.426 17:34:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:59.426 [2024-12-07 17:34:32.632069] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:17:59.426 [2024-12-07 17:34:32.632197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75109 ] 00:17:59.426 [2024-12-07 17:34:32.787620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:59.689 [2024-12-07 17:34:32.891935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.689 [2024-12-07 17:34:32.892207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.689 [2024-12-07 17:34:32.892216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.263 17:34:33 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.263 17:34:33 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:18:00.263 17:34:33 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:00.263 17:34:33 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:00.263 17:34:33 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:00.263 17:34:33 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:00.263 17:34:33 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:00.263 17:34:33 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:00.524 17:34:33 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:00.524 17:34:33 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:00.524 17:34:33 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:00.525 17:34:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:00.525 17:34:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:00.525 17:34:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:00.525 17:34:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:00.525 17:34:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:00.787 17:34:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:00.787 { 00:18:00.787 "name": "nvme0n1", 00:18:00.787 "aliases": [ 00:18:00.787 "f0fda4fa-50e7-4256-b512-22ad79e766d0" 00:18:00.787 ], 00:18:00.787 "product_name": "NVMe disk", 00:18:00.787 "block_size": 4096, 00:18:00.787 "num_blocks": 1310720, 00:18:00.787 "uuid": "f0fda4fa-50e7-4256-b512-22ad79e766d0", 00:18:00.787 "numa_id": -1, 00:18:00.787 "assigned_rate_limits": { 00:18:00.787 "rw_ios_per_sec": 0, 00:18:00.787 "rw_mbytes_per_sec": 0, 00:18:00.787 "r_mbytes_per_sec": 0, 00:18:00.787 "w_mbytes_per_sec": 0 00:18:00.787 }, 00:18:00.787 "claimed": false, 00:18:00.787 "zoned": false, 00:18:00.787 "supported_io_types": { 00:18:00.787 "read": true, 00:18:00.787 "write": true, 00:18:00.787 "unmap": true, 00:18:00.787 "flush": true, 00:18:00.787 "reset": true, 00:18:00.787 "nvme_admin": true, 00:18:00.787 "nvme_io": true, 00:18:00.787 "nvme_io_md": 
false, 00:18:00.787 "write_zeroes": true, 00:18:00.787 "zcopy": false, 00:18:00.787 "get_zone_info": false, 00:18:00.787 "zone_management": false, 00:18:00.787 "zone_append": false, 00:18:00.787 "compare": true, 00:18:00.787 "compare_and_write": false, 00:18:00.787 "abort": true, 00:18:00.787 "seek_hole": false, 00:18:00.787 "seek_data": false, 00:18:00.787 "copy": true, 00:18:00.787 "nvme_iov_md": false 00:18:00.787 }, 00:18:00.787 "driver_specific": { 00:18:00.787 "nvme": [ 00:18:00.787 { 00:18:00.787 "pci_address": "0000:00:11.0", 00:18:00.787 "trid": { 00:18:00.787 "trtype": "PCIe", 00:18:00.787 "traddr": "0000:00:11.0" 00:18:00.787 }, 00:18:00.787 "ctrlr_data": { 00:18:00.787 "cntlid": 0, 00:18:00.787 "vendor_id": "0x1b36", 00:18:00.787 "model_number": "QEMU NVMe Ctrl", 00:18:00.787 "serial_number": "12341", 00:18:00.787 "firmware_revision": "8.0.0", 00:18:00.787 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:00.787 "oacs": { 00:18:00.787 "security": 0, 00:18:00.787 "format": 1, 00:18:00.787 "firmware": 0, 00:18:00.787 "ns_manage": 1 00:18:00.787 }, 00:18:00.787 "multi_ctrlr": false, 00:18:00.787 "ana_reporting": false 00:18:00.787 }, 00:18:00.787 "vs": { 00:18:00.787 "nvme_version": "1.4" 00:18:00.787 }, 00:18:00.787 "ns_data": { 00:18:00.787 "id": 1, 00:18:00.787 "can_share": false 00:18:00.787 } 00:18:00.787 } 00:18:00.787 ], 00:18:00.787 "mp_policy": "active_passive" 00:18:00.787 } 00:18:00.787 } 00:18:00.787 ]' 00:18:00.787 17:34:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:00.787 17:34:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:00.787 17:34:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:00.787 17:34:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:00.787 17:34:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:00.787 17:34:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:18:00.787 17:34:33 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:00.787 17:34:33 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:00.787 17:34:33 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:00.787 17:34:33 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:00.787 17:34:33 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:01.049 17:34:34 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:01.049 17:34:34 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:01.049 17:34:34 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=2173a849-e7cb-45ef-92f4-b69e97070254 00:18:01.049 17:34:34 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2173a849-e7cb-45ef-92f4-b69e97070254 00:18:01.309 17:34:34 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=3a8ff329-3ed0-49e8-b6d9-3205338d5aac 00:18:01.309 17:34:34 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3a8ff329-3ed0-49e8-b6d9-3205338d5aac 00:18:01.309 17:34:34 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:01.309 17:34:34 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:01.309 17:34:34 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=3a8ff329-3ed0-49e8-b6d9-3205338d5aac 00:18:01.309 17:34:34 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:01.309 17:34:34 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 3a8ff329-3ed0-49e8-b6d9-3205338d5aac 00:18:01.309 17:34:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=3a8ff329-3ed0-49e8-b6d9-3205338d5aac 00:18:01.309 17:34:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:01.309 17:34:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:01.309 17:34:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:01.309 17:34:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3a8ff329-3ed0-49e8-b6d9-3205338d5aac 00:18:01.569 17:34:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:01.569 { 00:18:01.569 "name": "3a8ff329-3ed0-49e8-b6d9-3205338d5aac", 00:18:01.569 "aliases": [ 00:18:01.569 "lvs/nvme0n1p0" 00:18:01.569 ], 00:18:01.569 "product_name": "Logical Volume", 00:18:01.569 "block_size": 4096, 00:18:01.569 "num_blocks": 26476544, 00:18:01.569 "uuid": "3a8ff329-3ed0-49e8-b6d9-3205338d5aac", 00:18:01.569 "assigned_rate_limits": { 00:18:01.569 "rw_ios_per_sec": 0, 00:18:01.569 "rw_mbytes_per_sec": 0, 00:18:01.569 "r_mbytes_per_sec": 0, 00:18:01.569 "w_mbytes_per_sec": 0 00:18:01.569 }, 00:18:01.569 "claimed": false, 00:18:01.569 "zoned": false, 00:18:01.569 "supported_io_types": { 00:18:01.569 "read": true, 00:18:01.569 "write": true, 00:18:01.569 "unmap": true, 00:18:01.569 "flush": false, 00:18:01.569 "reset": true, 00:18:01.569 "nvme_admin": false, 00:18:01.569 "nvme_io": false, 00:18:01.569 "nvme_io_md": false, 00:18:01.569 "write_zeroes": true, 00:18:01.569 "zcopy": false, 00:18:01.569 "get_zone_info": false, 00:18:01.569 "zone_management": false, 00:18:01.569 "zone_append": false, 00:18:01.569 "compare": false, 00:18:01.569 "compare_and_write": false, 00:18:01.569 "abort": false, 00:18:01.569 "seek_hole": true, 00:18:01.569 "seek_data": true, 00:18:01.569 "copy": false, 00:18:01.569 "nvme_iov_md": false 00:18:01.569 }, 00:18:01.569 "driver_specific": { 00:18:01.569 "lvol": { 00:18:01.569 "lvol_store_uuid": "2173a849-e7cb-45ef-92f4-b69e97070254", 00:18:01.569 "base_bdev": "nvme0n1", 00:18:01.569 "thin_provision": true, 00:18:01.569 "num_allocated_clusters": 0, 00:18:01.569 "snapshot": false, 00:18:01.569 "clone": false, 00:18:01.569 "esnap_clone": false 00:18:01.569 } 00:18:01.569 } 00:18:01.569 } 00:18:01.569 ]' 00:18:01.569 17:34:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:01.569 17:34:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:01.569 17:34:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:01.569 17:34:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:01.569 17:34:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:01.569 17:34:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:01.569 17:34:34 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:01.569 17:34:34 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:01.569 17:34:34 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:01.829 17:34:35 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:01.829 17:34:35 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:18:01.829 17:34:35 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 3a8ff329-3ed0-49e8-b6d9-3205338d5aac 00:18:01.829 17:34:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=3a8ff329-3ed0-49e8-b6d9-3205338d5aac 00:18:01.829 17:34:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:01.829 17:34:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:01.829 17:34:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:01.829 17:34:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3a8ff329-3ed0-49e8-b6d9-3205338d5aac 00:18:02.090 17:34:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:02.090 { 00:18:02.090 "name": "3a8ff329-3ed0-49e8-b6d9-3205338d5aac", 00:18:02.090 "aliases": [ 00:18:02.090 "lvs/nvme0n1p0" 00:18:02.090 ], 00:18:02.090 "product_name": "Logical Volume", 00:18:02.090 "block_size": 4096, 00:18:02.090 "num_blocks": 26476544, 00:18:02.090 "uuid": "3a8ff329-3ed0-49e8-b6d9-3205338d5aac", 00:18:02.090 "assigned_rate_limits": { 00:18:02.090 "rw_ios_per_sec": 0, 00:18:02.090 "rw_mbytes_per_sec": 0, 00:18:02.090 "r_mbytes_per_sec": 0, 00:18:02.090 "w_mbytes_per_sec": 0 00:18:02.090 }, 00:18:02.090 "claimed": false, 00:18:02.090 "zoned": false, 00:18:02.090 "supported_io_types": { 00:18:02.090 "read": true, 00:18:02.090 "write": true, 00:18:02.090 "unmap": true, 00:18:02.090 "flush": false, 00:18:02.090 "reset": true, 00:18:02.090 "nvme_admin": false, 00:18:02.090 "nvme_io": false, 00:18:02.090 "nvme_io_md": false, 00:18:02.090 "write_zeroes": true, 00:18:02.090 "zcopy": false, 00:18:02.090 "get_zone_info": false, 00:18:02.090 "zone_management": false, 00:18:02.090 "zone_append": false, 00:18:02.090 "compare": false, 00:18:02.090 "compare_and_write": false, 00:18:02.090 "abort": false, 00:18:02.090 "seek_hole": true, 00:18:02.090 "seek_data": true, 00:18:02.090 "copy": false, 00:18:02.090 "nvme_iov_md": false 00:18:02.090 }, 00:18:02.090 "driver_specific": { 00:18:02.090 "lvol": { 00:18:02.090 "lvol_store_uuid": "2173a849-e7cb-45ef-92f4-b69e97070254", 00:18:02.090 "base_bdev": "nvme0n1", 00:18:02.090 "thin_provision": true, 00:18:02.090 "num_allocated_clusters": 0, 00:18:02.090 "snapshot": false, 00:18:02.090 "clone": false, 00:18:02.090 "esnap_clone": false 00:18:02.090 } 00:18:02.090 } 00:18:02.090 } 00:18:02.090 ]' 00:18:02.090 17:34:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:02.090 17:34:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:02.090 17:34:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:02.090 17:34:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:02.090 17:34:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:02.090 17:34:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:02.090 17:34:35 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:02.090 17:34:35 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:02.348 17:34:35 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:02.348 17:34:35 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:02.348 17:34:35 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:02.348 
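The xtrace line above, '[' -eq 1 ']', together with the "unary operator expected" message that follows, records a real shell bug: whatever variable fio.sh line 52 tests expanded to nothing, leaving [ a unary expression it cannot parse. The run survives only because the failed test falls through to the other branch (the trace resumes at fio.sh@56). Either form below is robust to an empty value; $nightly is a hypothetical stand-in for the actual variable:

    # what test was handed after expansion:   [ -eq 1 ]
    [ "${nightly:-0}" -eq 1 ]   # quote the expansion and give it a default
    [[ $nightly -eq 1 ]]        # [[ ]] arithmetic context treats empty as 0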
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:02.348 17:34:35 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 3a8ff329-3ed0-49e8-b6d9-3205338d5aac 00:18:02.348 17:34:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=3a8ff329-3ed0-49e8-b6d9-3205338d5aac 00:18:02.348 17:34:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:02.348 17:34:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:02.348 17:34:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:02.348 17:34:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3a8ff329-3ed0-49e8-b6d9-3205338d5aac 00:18:02.606 17:34:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:02.606 { 00:18:02.606 "name": "3a8ff329-3ed0-49e8-b6d9-3205338d5aac", 00:18:02.606 "aliases": [ 00:18:02.606 "lvs/nvme0n1p0" 00:18:02.606 ], 00:18:02.606 "product_name": "Logical Volume", 00:18:02.606 "block_size": 4096, 00:18:02.606 "num_blocks": 26476544, 00:18:02.606 "uuid": "3a8ff329-3ed0-49e8-b6d9-3205338d5aac", 00:18:02.606 "assigned_rate_limits": { 00:18:02.606 "rw_ios_per_sec": 0, 00:18:02.606 "rw_mbytes_per_sec": 0, 00:18:02.606 "r_mbytes_per_sec": 0, 00:18:02.607 "w_mbytes_per_sec": 0 00:18:02.607 }, 00:18:02.607 "claimed": false, 00:18:02.607 "zoned": false, 00:18:02.607 "supported_io_types": { 00:18:02.607 "read": true, 00:18:02.607 "write": true, 00:18:02.607 "unmap": true, 00:18:02.607 "flush": false, 00:18:02.607 "reset": true, 00:18:02.607 "nvme_admin": false, 00:18:02.607 "nvme_io": false, 00:18:02.607 "nvme_io_md": false, 00:18:02.607 "write_zeroes": true, 00:18:02.607 "zcopy": false, 00:18:02.607 "get_zone_info": false, 00:18:02.607 "zone_management": false, 00:18:02.607 "zone_append": false, 00:18:02.607 "compare": false, 00:18:02.607 "compare_and_write": false, 00:18:02.607 "abort": false, 00:18:02.607 "seek_hole": true, 00:18:02.607 "seek_data": true, 00:18:02.607 "copy": false, 00:18:02.607 "nvme_iov_md": false 00:18:02.607 }, 00:18:02.607 "driver_specific": { 00:18:02.607 "lvol": { 00:18:02.607 "lvol_store_uuid": "2173a849-e7cb-45ef-92f4-b69e97070254", 00:18:02.607 "base_bdev": "nvme0n1", 00:18:02.607 "thin_provision": true, 00:18:02.607 "num_allocated_clusters": 0, 00:18:02.607 "snapshot": false, 00:18:02.607 "clone": false, 00:18:02.607 "esnap_clone": false 00:18:02.607 } 00:18:02.607 } 00:18:02.607 } 00:18:02.607 ]' 00:18:02.607 17:34:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:02.607 17:34:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:02.607 17:34:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:02.607 17:34:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:02.607 17:34:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:02.607 17:34:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:02.607 17:34:35 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:02.607 17:34:35 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:02.607 17:34:35 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3a8ff329-3ed0-49e8-b6d9-3205338d5aac -c nvc0n1p0 --l2p_dram_limit 60 00:18:02.866 [2024-12-07 17:34:36.032612] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.866 [2024-12-07 17:34:36.032655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:02.867 [2024-12-07 17:34:36.032670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:02.867 [2024-12-07 17:34:36.032677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.867 [2024-12-07 17:34:36.032724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.867 [2024-12-07 17:34:36.032734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:02.867 [2024-12-07 17:34:36.032744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:02.867 [2024-12-07 17:34:36.032750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.867 [2024-12-07 17:34:36.032785] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:02.867 [2024-12-07 17:34:36.033374] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:02.867 [2024-12-07 17:34:36.033397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.867 [2024-12-07 17:34:36.033404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:02.867 [2024-12-07 17:34:36.033412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:18:02.867 [2024-12-07 17:34:36.033418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.867 [2024-12-07 17:34:36.033455] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ca9177ae-493f-4f2e-82ed-b09e2b98605c 00:18:02.867 [2024-12-07 17:34:36.034773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.867 [2024-12-07 17:34:36.034798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:02.867 [2024-12-07 17:34:36.034807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:02.867 [2024-12-07 17:34:36.034816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.867 [2024-12-07 17:34:36.041675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.867 [2024-12-07 17:34:36.041706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:02.867 [2024-12-07 17:34:36.041715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.777 ms 00:18:02.867 [2024-12-07 17:34:36.041722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.867 [2024-12-07 17:34:36.041814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.867 [2024-12-07 17:34:36.041824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:02.867 [2024-12-07 17:34:36.041831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:02.867 [2024-12-07 17:34:36.041842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.867 [2024-12-07 17:34:36.041894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.867 [2024-12-07 17:34:36.041903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:02.867 [2024-12-07 17:34:36.041909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:02.867 [2024-12-07 17:34:36.041917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:02.867 [2024-12-07 17:34:36.041941] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:02.867 [2024-12-07 17:34:36.045170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.867 [2024-12-07 17:34:36.045198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:02.867 [2024-12-07 17:34:36.045209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.231 ms 00:18:02.867 [2024-12-07 17:34:36.045217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.867 [2024-12-07 17:34:36.045262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.867 [2024-12-07 17:34:36.045270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:02.867 [2024-12-07 17:34:36.045278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:02.867 [2024-12-07 17:34:36.045284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.867 [2024-12-07 17:34:36.045304] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:02.867 [2024-12-07 17:34:36.045429] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:02.867 [2024-12-07 17:34:36.045446] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:02.867 [2024-12-07 17:34:36.045455] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:02.867 [2024-12-07 17:34:36.045465] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:02.867 [2024-12-07 17:34:36.045472] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:02.867 [2024-12-07 17:34:36.045481] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:02.867 [2024-12-07 17:34:36.045488] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:02.867 [2024-12-07 17:34:36.045495] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:02.867 [2024-12-07 17:34:36.045501] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:02.867 [2024-12-07 17:34:36.045509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.867 [2024-12-07 17:34:36.045516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:02.867 [2024-12-07 17:34:36.045524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:18:02.867 [2024-12-07 17:34:36.045529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.867 [2024-12-07 17:34:36.045613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.867 [2024-12-07 17:34:36.045619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:02.867 [2024-12-07 17:34:36.045627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:02.867 [2024-12-07 17:34:36.045632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.867 [2024-12-07 17:34:36.045731] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:02.867 [2024-12-07 17:34:36.045744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:02.867 
[2024-12-07 17:34:36.045754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:02.867 [2024-12-07 17:34:36.045760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.867 [2024-12-07 17:34:36.045768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:02.867 [2024-12-07 17:34:36.045773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:02.867 [2024-12-07 17:34:36.045780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:02.867 [2024-12-07 17:34:36.045786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:02.867 [2024-12-07 17:34:36.045794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:02.867 [2024-12-07 17:34:36.045800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:02.867 [2024-12-07 17:34:36.045806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:02.867 [2024-12-07 17:34:36.045811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:02.867 [2024-12-07 17:34:36.045821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:02.867 [2024-12-07 17:34:36.045826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:02.867 [2024-12-07 17:34:36.045833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:02.867 [2024-12-07 17:34:36.045838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.867 [2024-12-07 17:34:36.045846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:02.867 [2024-12-07 17:34:36.045852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:02.867 [2024-12-07 17:34:36.045858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.867 [2024-12-07 17:34:36.045863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:02.867 [2024-12-07 17:34:36.045869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:02.867 [2024-12-07 17:34:36.045874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:02.867 [2024-12-07 17:34:36.045882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:02.867 [2024-12-07 17:34:36.045887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:02.867 [2024-12-07 17:34:36.045893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:02.867 [2024-12-07 17:34:36.045898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:02.867 [2024-12-07 17:34:36.045905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:02.867 [2024-12-07 17:34:36.045910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:02.867 [2024-12-07 17:34:36.045916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:02.867 [2024-12-07 17:34:36.045921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:02.867 [2024-12-07 17:34:36.045928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:02.867 [2024-12-07 17:34:36.045933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:02.867 [2024-12-07 17:34:36.045941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:02.867 [2024-12-07 17:34:36.045959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:18:02.867 [2024-12-07 17:34:36.045966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:02.867 [2024-12-07 17:34:36.045971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:02.867 [2024-12-07 17:34:36.045977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:02.867 [2024-12-07 17:34:36.045995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:02.867 [2024-12-07 17:34:36.046002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:02.867 [2024-12-07 17:34:36.046007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.867 [2024-12-07 17:34:36.046014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:02.867 [2024-12-07 17:34:36.046019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:02.867 [2024-12-07 17:34:36.046026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.867 [2024-12-07 17:34:36.046031] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:02.867 [2024-12-07 17:34:36.046041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:02.867 [2024-12-07 17:34:36.046047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:02.867 [2024-12-07 17:34:36.046054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:02.867 [2024-12-07 17:34:36.046060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:02.868 [2024-12-07 17:34:36.046070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:02.868 [2024-12-07 17:34:36.046075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:02.868 [2024-12-07 17:34:36.046082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:02.868 [2024-12-07 17:34:36.046087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:02.868 [2024-12-07 17:34:36.046094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:02.868 [2024-12-07 17:34:36.046100] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:02.868 [2024-12-07 17:34:36.046110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:02.868 [2024-12-07 17:34:36.046116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:02.868 [2024-12-07 17:34:36.046123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:02.868 [2024-12-07 17:34:36.046128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:02.868 [2024-12-07 17:34:36.046135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:02.868 [2024-12-07 17:34:36.046141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:02.868 [2024-12-07 17:34:36.046149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:02.868 [2024-12-07 
17:34:36.046154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:02.868 [2024-12-07 17:34:36.046161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:02.868 [2024-12-07 17:34:36.046167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:02.868 [2024-12-07 17:34:36.046176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:02.868 [2024-12-07 17:34:36.046181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:02.868 [2024-12-07 17:34:36.046187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:02.868 [2024-12-07 17:34:36.046192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:02.868 [2024-12-07 17:34:36.046200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:02.868 [2024-12-07 17:34:36.046205] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:02.868 [2024-12-07 17:34:36.046213] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:02.868 [2024-12-07 17:34:36.046221] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:02.868 [2024-12-07 17:34:36.046228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:02.868 [2024-12-07 17:34:36.046234] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:02.868 [2024-12-07 17:34:36.046241] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:02.868 [2024-12-07 17:34:36.046246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.868 [2024-12-07 17:34:36.046256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:02.868 [2024-12-07 17:34:36.046262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:18:02.868 [2024-12-07 17:34:36.046269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.868 [2024-12-07 17:34:36.046336] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
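[editor's note] The hex-encoded superblock metadata layout above and the MiB-denominated layout dump before it describe the same regions in different units: blk_offs/blk_sz count FTL blocks of 4096 bytes (the "block_size": 4096 that bdev_get_bdevs reports for ftl0 later in this log). A minimal Python sketch cross-checking the two dumps; the identification of region type 0x2 with the l2p region is inferred here from its matching offset and size, not stated by the log:

    BLOCK_SIZE = 4096  # from '"block_size": 4096' in the bdev_get_bdevs output below

    def mib(blocks: int) -> float:
        # Convert a blk_sz/blk_offs block count from the superblock dump into MiB.
        return blocks * BLOCK_SIZE / (1024 * 1024)

    # Region type:0x2 blk_offs:0x20 blk_sz:0x5000 lines up with "Region l2p":
    assert mib(0x5000) == 80.0          # matches "blocks: 80.00 MiB"
    assert round(mib(0x20), 2) == 0.12  # matches "offset: 0.12 MiB"

    # The same 80 MiB follows from the geometry lines in the layout setup:
    # "L2P entries: 20971520" x "L2P address size: 4" bytes per entry.
    assert 20971520 * 4 == 80 * 1024 * 1024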
00:18:02.868 [2024-12-07 17:34:36.046349] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:05.399 [2024-12-07 17:34:38.661443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.399 [2024-12-07 17:34:38.661521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:05.399 [2024-12-07 17:34:38.661538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2615.096 ms 00:18:05.399 [2024-12-07 17:34:38.661549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.399 [2024-12-07 17:34:38.689792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.399 [2024-12-07 17:34:38.689840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:05.399 [2024-12-07 17:34:38.689853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.014 ms 00:18:05.399 [2024-12-07 17:34:38.689863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.399 [2024-12-07 17:34:38.690011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.399 [2024-12-07 17:34:38.690024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:05.399 [2024-12-07 17:34:38.690034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:18:05.399 [2024-12-07 17:34:38.690046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.399 [2024-12-07 17:34:38.737893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.399 [2024-12-07 17:34:38.737940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:05.399 [2024-12-07 17:34:38.737957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.799 ms 00:18:05.399 [2024-12-07 17:34:38.737967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.399 [2024-12-07 17:34:38.738024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.399 [2024-12-07 17:34:38.738037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:05.399 [2024-12-07 17:34:38.738046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:05.399 [2024-12-07 17:34:38.738056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.399 [2024-12-07 17:34:38.738525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.399 [2024-12-07 17:34:38.738553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:05.399 [2024-12-07 17:34:38.738562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:18:05.399 [2024-12-07 17:34:38.738574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.399 [2024-12-07 17:34:38.738708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.399 [2024-12-07 17:34:38.738723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:05.399 [2024-12-07 17:34:38.738732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:18:05.399 [2024-12-07 17:34:38.738744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.399 [2024-12-07 17:34:38.754672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.399 [2024-12-07 17:34:38.754708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:05.399 [2024-12-07 
17:34:38.754718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.904 ms 00:18:05.399 [2024-12-07 17:34:38.754727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.399 [2024-12-07 17:34:38.767027] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:05.657 [2024-12-07 17:34:38.784314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.657 [2024-12-07 17:34:38.784348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:05.657 [2024-12-07 17:34:38.784362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.493 ms 00:18:05.657 [2024-12-07 17:34:38.784371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.657 [2024-12-07 17:34:38.837442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.658 [2024-12-07 17:34:38.837482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:05.658 [2024-12-07 17:34:38.837499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.034 ms 00:18:05.658 [2024-12-07 17:34:38.837507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.658 [2024-12-07 17:34:38.837716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.658 [2024-12-07 17:34:38.837728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:05.658 [2024-12-07 17:34:38.837741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:18:05.658 [2024-12-07 17:34:38.837749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.658 [2024-12-07 17:34:38.860564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.658 [2024-12-07 17:34:38.860599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:05.658 [2024-12-07 17:34:38.860612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.760 ms 00:18:05.658 [2024-12-07 17:34:38.860620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.658 [2024-12-07 17:34:38.882786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.658 [2024-12-07 17:34:38.882817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:05.658 [2024-12-07 17:34:38.882830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.124 ms 00:18:05.658 [2024-12-07 17:34:38.882837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.658 [2024-12-07 17:34:38.883437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.658 [2024-12-07 17:34:38.883455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:05.658 [2024-12-07 17:34:38.883466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:18:05.658 [2024-12-07 17:34:38.883473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.658 [2024-12-07 17:34:38.950858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.658 [2024-12-07 17:34:38.950893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:05.658 [2024-12-07 17:34:38.950908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.339 ms 00:18:05.658 [2024-12-07 17:34:38.950918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.658 [2024-12-07 
17:34:38.975366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.658 [2024-12-07 17:34:38.975399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:05.658 [2024-12-07 17:34:38.975412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.352 ms 00:18:05.658 [2024-12-07 17:34:38.975420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.658 [2024-12-07 17:34:38.998309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.658 [2024-12-07 17:34:38.998340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:05.658 [2024-12-07 17:34:38.998352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.843 ms 00:18:05.658 [2024-12-07 17:34:38.998360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.658 [2024-12-07 17:34:39.021746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.658 [2024-12-07 17:34:39.021781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:05.658 [2024-12-07 17:34:39.021795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.343 ms 00:18:05.658 [2024-12-07 17:34:39.021802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.658 [2024-12-07 17:34:39.021856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.658 [2024-12-07 17:34:39.021866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:05.658 [2024-12-07 17:34:39.021882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:05.658 [2024-12-07 17:34:39.021890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.658 [2024-12-07 17:34:39.021975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.658 [2024-12-07 17:34:39.021997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:05.658 [2024-12-07 17:34:39.022008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:05.658 [2024-12-07 17:34:39.022015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.658 [2024-12-07 17:34:39.023052] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2989.956 ms, result 0 00:18:05.658 { 00:18:05.658 "name": "ftl0", 00:18:05.658 "uuid": "ca9177ae-493f-4f2e-82ed-b09e2b98605c" 00:18:05.658 } 00:18:05.916 17:34:39 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:05.916 17:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:05.916 17:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:05.916 17:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:18:05.916 17:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:05.916 17:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:05.916 17:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:05.916 17:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:06.173 [ 00:18:06.173 { 00:18:06.173 "name": "ftl0", 00:18:06.173 "aliases": [ 00:18:06.173 "ca9177ae-493f-4f2e-82ed-b09e2b98605c" 00:18:06.173 ], 00:18:06.173 "product_name": "FTL 
disk", 00:18:06.173 "block_size": 4096, 00:18:06.173 "num_blocks": 20971520, 00:18:06.173 "uuid": "ca9177ae-493f-4f2e-82ed-b09e2b98605c", 00:18:06.173 "assigned_rate_limits": { 00:18:06.173 "rw_ios_per_sec": 0, 00:18:06.173 "rw_mbytes_per_sec": 0, 00:18:06.173 "r_mbytes_per_sec": 0, 00:18:06.173 "w_mbytes_per_sec": 0 00:18:06.173 }, 00:18:06.173 "claimed": false, 00:18:06.173 "zoned": false, 00:18:06.173 "supported_io_types": { 00:18:06.173 "read": true, 00:18:06.173 "write": true, 00:18:06.173 "unmap": true, 00:18:06.173 "flush": true, 00:18:06.174 "reset": false, 00:18:06.174 "nvme_admin": false, 00:18:06.174 "nvme_io": false, 00:18:06.174 "nvme_io_md": false, 00:18:06.174 "write_zeroes": true, 00:18:06.174 "zcopy": false, 00:18:06.174 "get_zone_info": false, 00:18:06.174 "zone_management": false, 00:18:06.174 "zone_append": false, 00:18:06.174 "compare": false, 00:18:06.174 "compare_and_write": false, 00:18:06.174 "abort": false, 00:18:06.174 "seek_hole": false, 00:18:06.174 "seek_data": false, 00:18:06.174 "copy": false, 00:18:06.174 "nvme_iov_md": false 00:18:06.174 }, 00:18:06.174 "driver_specific": { 00:18:06.174 "ftl": { 00:18:06.174 "base_bdev": "3a8ff329-3ed0-49e8-b6d9-3205338d5aac", 00:18:06.174 "cache": "nvc0n1p0" 00:18:06.174 } 00:18:06.174 } 00:18:06.174 } 00:18:06.174 ] 00:18:06.174 17:34:39 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:18:06.174 17:34:39 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:06.174 17:34:39 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:06.432 17:34:39 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:06.432 17:34:39 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:06.432 [2024-12-07 17:34:39.747596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.432 [2024-12-07 17:34:39.747635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:06.432 [2024-12-07 17:34:39.747645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:06.432 [2024-12-07 17:34:39.747653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.432 [2024-12-07 17:34:39.747685] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:06.432 [2024-12-07 17:34:39.749920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.432 [2024-12-07 17:34:39.749946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:06.432 [2024-12-07 17:34:39.749956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.220 ms 00:18:06.432 [2024-12-07 17:34:39.749963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.432 [2024-12-07 17:34:39.750377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.432 [2024-12-07 17:34:39.750391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:06.432 [2024-12-07 17:34:39.750399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:18:06.432 [2024-12-07 17:34:39.750405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.432 [2024-12-07 17:34:39.752859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.432 [2024-12-07 17:34:39.752880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:06.432 
[2024-12-07 17:34:39.752889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.431 ms 00:18:06.432 [2024-12-07 17:34:39.752895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.432 [2024-12-07 17:34:39.757627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.432 [2024-12-07 17:34:39.757650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:06.432 [2024-12-07 17:34:39.757659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.707 ms 00:18:06.432 [2024-12-07 17:34:39.757666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.432 [2024-12-07 17:34:39.776431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.432 [2024-12-07 17:34:39.776460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:06.432 [2024-12-07 17:34:39.776481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.695 ms 00:18:06.432 [2024-12-07 17:34:39.776487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.432 [2024-12-07 17:34:39.788913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.432 [2024-12-07 17:34:39.788942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:06.432 [2024-12-07 17:34:39.788956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.389 ms 00:18:06.432 [2024-12-07 17:34:39.788962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.432 [2024-12-07 17:34:39.789126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.432 [2024-12-07 17:34:39.789134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:06.432 [2024-12-07 17:34:39.789143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:18:06.432 [2024-12-07 17:34:39.789149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.432 [2024-12-07 17:34:39.806932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.432 [2024-12-07 17:34:39.806960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:06.432 [2024-12-07 17:34:39.806970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.760 ms 00:18:06.432 [2024-12-07 17:34:39.806976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.691 [2024-12-07 17:34:39.824189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.691 [2024-12-07 17:34:39.824216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:06.691 [2024-12-07 17:34:39.824226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.166 ms 00:18:06.691 [2024-12-07 17:34:39.824231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.691 [2024-12-07 17:34:39.841390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.691 [2024-12-07 17:34:39.841424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:06.691 [2024-12-07 17:34:39.841434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.117 ms 00:18:06.691 [2024-12-07 17:34:39.841440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.691 [2024-12-07 17:34:39.858377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.691 [2024-12-07 17:34:39.858403] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:06.691 [2024-12-07 17:34:39.858412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.847 ms 00:18:06.691 [2024-12-07 17:34:39.858417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.691 [2024-12-07 17:34:39.858453] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:06.691 [2024-12-07 17:34:39.858465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 
[2024-12-07 17:34:39.858616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:06.691 [2024-12-07 17:34:39.858776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:06.691 [2024-12-07 17:34:39.858784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.858994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.859002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.859007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.859014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.859020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.859039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.859045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.859052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.859058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.859065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.859070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.859080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.859086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.859093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.859099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.859106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.859113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.859121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.859127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.859135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.859141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.859149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.859156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.859163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:06.692 [2024-12-07 17:34:39.859175] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:06.692 [2024-12-07 17:34:39.859183] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ca9177ae-493f-4f2e-82ed-b09e2b98605c 00:18:06.692 [2024-12-07 17:34:39.859189] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:06.692 [2024-12-07 17:34:39.859198] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:06.692 [2024-12-07 17:34:39.859204] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:06.692 [2024-12-07 17:34:39.859213] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:06.692 [2024-12-07 17:34:39.859218] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:06.692 [2024-12-07 17:34:39.859226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:06.692 [2024-12-07 17:34:39.859231] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:06.692 [2024-12-07 17:34:39.859237] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:06.692 [2024-12-07 17:34:39.859242] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:06.692 [2024-12-07 17:34:39.859249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.692 [2024-12-07 17:34:39.859255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:06.692 [2024-12-07 17:34:39.859262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.797 ms 00:18:06.692 [2024-12-07 17:34:39.859268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.692 [2024-12-07 17:34:39.869377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.692 [2024-12-07 17:34:39.869405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:06.692 [2024-12-07 17:34:39.869415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.077 ms 00:18:06.692 [2024-12-07 17:34:39.869420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.692 [2024-12-07 17:34:39.869732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.692 [2024-12-07 17:34:39.869745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:06.692 [2024-12-07 17:34:39.869753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:18:06.692 [2024-12-07 17:34:39.869759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.692 [2024-12-07 17:34:39.906019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.692 [2024-12-07 17:34:39.906049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:06.692 [2024-12-07 17:34:39.906058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.692 [2024-12-07 17:34:39.906065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
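[editor's note] From here the trace switches from "Action" to "Rollback" lines: after the shutdown-specific actions (Persist L2P through Set FTL clean state), the 'FTL shutdown' management process unwinds the startup steps, logging each with the same name/duration/status format. A minimal sketch of that pattern, assuming an ordered step table like the one the trace implies — illustrative only, with hypothetical names, not the SPDK source:

    import time

    class Step:
        # One entry of a management step table: a forward action plus an
        # optional rollback used to unwind it later (hypothetical structure).
        def __init__(self, name, action, rollback=None):
            self.name, self.action, self.rollback = name, action, rollback

    def trace(kind, name, t0):
        # Mirrors the trace_step output format: kind, name, duration, status.
        dur_ms = (time.monotonic() - t0) * 1000
        print(f"{kind} name: {name} duration: {dur_ms:.3f} ms status: 0")

    def run_and_unwind(steps):
        done = []
        for step in steps:          # startup: run steps in order ("Action" lines)
            t = time.monotonic()
            step.action()
            done.append(step)
            trace("Action", step.name, t)
        for step in reversed(done): # shutdown: unwind in reverse ("Rollback" lines)
            t = time.monotonic()
            if step.rollback:
                step.rollback()
            trace("Rollback", step.name, t)

Consistent with that reading, the Rollback names below retrace the startup Actions in exactly reverse order, from "Initialize reloc" back through "Open base bdev".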
00:18:06.692 [2024-12-07 17:34:39.906122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.692 [2024-12-07 17:34:39.906129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:06.692 [2024-12-07 17:34:39.906137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.692 [2024-12-07 17:34:39.906142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.692 [2024-12-07 17:34:39.906215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.692 [2024-12-07 17:34:39.906225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:06.692 [2024-12-07 17:34:39.906233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.692 [2024-12-07 17:34:39.906239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.692 [2024-12-07 17:34:39.906263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.692 [2024-12-07 17:34:39.906269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:06.692 [2024-12-07 17:34:39.906277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.692 [2024-12-07 17:34:39.906283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.692 [2024-12-07 17:34:39.972569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.692 [2024-12-07 17:34:39.972612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:06.692 [2024-12-07 17:34:39.972624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.692 [2024-12-07 17:34:39.972631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.692 [2024-12-07 17:34:40.023946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.692 [2024-12-07 17:34:40.024000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:06.692 [2024-12-07 17:34:40.024013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.692 [2024-12-07 17:34:40.024020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.693 [2024-12-07 17:34:40.024118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.693 [2024-12-07 17:34:40.024126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:06.693 [2024-12-07 17:34:40.024137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.693 [2024-12-07 17:34:40.024143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.693 [2024-12-07 17:34:40.024201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.693 [2024-12-07 17:34:40.024208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:06.693 [2024-12-07 17:34:40.024216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.693 [2024-12-07 17:34:40.024222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.693 [2024-12-07 17:34:40.024323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.693 [2024-12-07 17:34:40.024331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:06.693 [2024-12-07 17:34:40.024340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.693 [2024-12-07 
17:34:40.024348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.693 [2024-12-07 17:34:40.024400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.693 [2024-12-07 17:34:40.024421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:06.693 [2024-12-07 17:34:40.024429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.693 [2024-12-07 17:34:40.024435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.693 [2024-12-07 17:34:40.024482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.693 [2024-12-07 17:34:40.024494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:06.693 [2024-12-07 17:34:40.024502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.693 [2024-12-07 17:34:40.024510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.693 [2024-12-07 17:34:40.024565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.693 [2024-12-07 17:34:40.024573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:06.693 [2024-12-07 17:34:40.024581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.693 [2024-12-07 17:34:40.024587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.693 [2024-12-07 17:34:40.024741] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 277.115 ms, result 0 00:18:06.693 true 00:18:06.693 17:34:40 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75109 00:18:06.693 17:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75109 ']' 00:18:06.693 17:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75109 00:18:06.693 17:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:06.693 17:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.693 17:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75109 00:18:06.693 17:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:06.693 17:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:06.951 killing process with pid 75109 00:18:06.951 17:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75109' 00:18:06.951 17:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75109 00:18:06.951 17:34:40 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75109 00:18:13.513 17:34:45 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:13.513 17:34:45 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:13.513 17:34:45 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:13.513 17:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:13.513 17:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:13.513 17:34:45 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:13.513 17:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:13.513 17:34:45 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:13.513 17:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:13.513 17:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:13.513 17:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:13.513 17:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:13.513 17:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:13.513 17:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:13.513 17:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:13.513 17:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:13.513 17:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:13.513 17:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:13.513 17:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:13.513 17:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:13.513 17:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:13.513 17:34:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:13.513 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:13.513 fio-3.35 00:18:13.513 Starting 1 thread 00:18:17.701 00:18:17.701 test: (groupid=0, jobs=1): err= 0: pid=75291: Sat Dec 7 17:34:50 2024 00:18:17.701 read: IOPS=1131, BW=75.1MiB/s (78.8MB/s)(255MiB/3388msec) 00:18:17.701 slat (nsec): min=4259, max=19843, avg=5606.47, stdev=1865.76 00:18:17.701 clat (usec): min=278, max=1372, avg=395.00, stdev=102.91 00:18:17.701 lat (usec): min=283, max=1377, avg=400.60, stdev=103.27 00:18:17.701 clat percentiles (usec): 00:18:17.701 | 1.00th=[ 318], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 330], 00:18:17.701 | 30.00th=[ 334], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 392], 00:18:17.701 | 70.00th=[ 404], 80.00th=[ 449], 90.00th=[ 519], 95.00th=[ 603], 00:18:17.701 | 99.00th=[ 816], 99.50th=[ 873], 99.90th=[ 955], 99.95th=[ 1106], 00:18:17.701 | 99.99th=[ 1369] 00:18:17.701 write: IOPS=1139, BW=75.6MiB/s (79.3MB/s)(256MiB/3385msec); 0 zone resets 00:18:17.701 slat (nsec): min=14745, max=69892, avg=19125.61, stdev=3153.28 00:18:17.701 clat (usec): min=302, max=1959, avg=447.97, stdev=147.54 00:18:17.701 lat (usec): min=326, max=1992, avg=467.09, stdev=148.22 00:18:17.701 clat percentiles (usec): 00:18:17.701 | 1.00th=[ 343], 5.00th=[ 351], 10.00th=[ 351], 20.00th=[ 355], 00:18:17.701 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 375], 60.00th=[ 441], 00:18:17.701 | 70.00th=[ 490], 80.00th=[ 498], 90.00th=[ 570], 95.00th=[ 750], 00:18:17.701 | 99.00th=[ 955], 99.50th=[ 1156], 99.90th=[ 1795], 99.95th=[ 1926], 00:18:17.701 | 99.99th=[ 1958] 00:18:17.701 bw ( KiB/s): min=71944, max=83232, per=99.28%, avg=76905.33, stdev=3965.86, samples=6 00:18:17.701 iops : min= 1058, max= 1224, avg=1130.83, stdev=58.37, samples=6 00:18:17.701 lat (usec) : 500=84.77%, 750=11.61%, 1000=3.16% 
00:18:17.701 lat (msec) : 2=0.46% 00:18:17.701 cpu : usr=99.29%, sys=0.06%, ctx=6, majf=0, minf=1169 00:18:17.701 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:17.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.701 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.701 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:17.701 00:18:17.701 Run status group 0 (all jobs): 00:18:17.701 READ: bw=75.1MiB/s (78.8MB/s), 75.1MiB/s-75.1MiB/s (78.8MB/s-78.8MB/s), io=255MiB (267MB), run=3388-3388msec 00:18:17.701 WRITE: bw=75.6MiB/s (79.3MB/s), 75.6MiB/s-75.6MiB/s (79.3MB/s-79.3MB/s), io=256MiB (269MB), run=3385-3385msec 00:18:19.088 ----------------------------------------------------- 00:18:19.088 Suppressions used: 00:18:19.088 count bytes template 00:18:19.088 1 5 /usr/src/fio/parse.c 00:18:19.088 1 8 libtcmalloc_minimal.so 00:18:19.088 1 904 libcrypto.so 00:18:19.088 ----------------------------------------------------- 00:18:19.088 00:18:19.088 17:34:52 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:19.088 17:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:19.088 17:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:19.088 17:34:52 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:19.088 17:34:52 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:19.088 17:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:19.088 17:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:19.088 17:34:52 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:19.088 17:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:19.088 17:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:19.088 17:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:19.088 17:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:19.088 17:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:19.088 17:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:19.088 17:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:19.088 17:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:19.088 17:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:19.088 17:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:19.088 17:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:19.088 17:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:19.088 17:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:19.088 17:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:19.088 17:34:52 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:19.088 17:34:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:19.088 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:19.088 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:19.088 fio-3.35 00:18:19.088 Starting 2 threads 00:18:45.646 00:18:45.646 first_half: (groupid=0, jobs=1): err= 0: pid=75388: Sat Dec 7 17:35:16 2024 00:18:45.646 read: IOPS=2772, BW=10.8MiB/s (11.4MB/s)(255MiB/23584msec) 00:18:45.646 slat (nsec): min=3102, max=27955, avg=4399.43, stdev=1428.42 00:18:45.646 clat (usec): min=701, max=353830, avg=35157.30, stdev=20861.22 00:18:45.646 lat (usec): min=710, max=353835, avg=35161.70, stdev=20861.39 00:18:45.646 clat percentiles (msec): 00:18:45.646 | 1.00th=[ 8], 5.00th=[ 28], 10.00th=[ 31], 20.00th=[ 31], 00:18:45.646 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 32], 00:18:45.646 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 40], 95.00th=[ 46], 00:18:45.646 | 99.00th=[ 148], 99.50th=[ 163], 99.90th=[ 251], 99.95th=[ 275], 00:18:45.646 | 99.99th=[ 351] 00:18:45.646 write: IOPS=2895, BW=11.3MiB/s (11.9MB/s)(256MiB/22632msec); 0 zone resets 00:18:45.646 slat (usec): min=3, max=289, avg= 6.04, stdev= 3.12 00:18:45.646 clat (usec): min=385, max=98382, avg=10953.82, stdev=18438.45 00:18:45.646 lat (usec): min=392, max=98387, avg=10959.86, stdev=18438.72 00:18:45.646 clat percentiles (usec): 00:18:45.646 | 1.00th=[ 709], 5.00th=[ 914], 10.00th=[ 1123], 20.00th=[ 1827], 00:18:45.646 | 30.00th=[ 3064], 40.00th=[ 4228], 50.00th=[ 5014], 60.00th=[ 5538], 00:18:45.646 | 70.00th=[ 6587], 80.00th=[12387], 90.00th=[20841], 95.00th=[67634], 00:18:45.646 | 99.00th=[86508], 99.50th=[90702], 99.90th=[94897], 99.95th=[95945], 00:18:45.646 | 99.99th=[98042] 00:18:45.646 bw ( KiB/s): min= 880, max=41792, per=94.29%, avg=21843.71, stdev=13742.64, samples=24 00:18:45.646 iops : min= 220, max=10448, avg=5460.92, stdev=3435.66, samples=24 00:18:45.646 lat (usec) : 500=0.02%, 750=0.88%, 1000=2.57% 00:18:45.646 lat (msec) : 2=7.19%, 4=8.47%, 10=21.09%, 20=6.85%, 50=47.19% 00:18:45.646 lat (msec) : 100=4.59%, 250=1.11%, 500=0.05% 00:18:45.646 cpu : usr=99.38%, sys=0.09%, ctx=49, majf=0, minf=5536 00:18:45.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:45.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.646 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:45.646 issued rwts: total=65389,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.646 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:45.646 second_half: (groupid=0, jobs=1): err= 0: pid=75389: Sat Dec 7 17:35:16 2024 00:18:45.646 read: IOPS=2787, BW=10.9MiB/s (11.4MB/s)(255MiB/23407msec) 00:18:45.646 slat (nsec): min=3079, max=32972, avg=5150.47, stdev=1353.04 00:18:45.646 clat (usec): min=619, max=346863, avg=35658.46, stdev=18759.83 00:18:45.646 lat (usec): min=623, max=346867, avg=35663.61, stdev=18759.94 00:18:45.646 clat percentiles (msec): 00:18:45.646 | 1.00th=[ 8], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 31], 00:18:45.646 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 33], 00:18:45.646 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 40], 
95.00th=[ 50], 00:18:45.646 | 99.00th=[ 138], 99.50th=[ 153], 99.90th=[ 192], 99.95th=[ 262], 00:18:45.646 | 99.99th=[ 330] 00:18:45.646 write: IOPS=3671, BW=14.3MiB/s (15.0MB/s)(256MiB/17850msec); 0 zone resets 00:18:45.646 slat (usec): min=3, max=767, avg= 6.41, stdev= 6.64 00:18:45.646 clat (usec): min=353, max=97242, avg=10194.81, stdev=17919.56 00:18:45.646 lat (usec): min=361, max=97246, avg=10201.23, stdev=17919.53 00:18:45.646 clat percentiles (usec): 00:18:45.646 | 1.00th=[ 783], 5.00th=[ 1004], 10.00th=[ 1172], 20.00th=[ 1582], 00:18:45.646 | 30.00th=[ 2507], 40.00th=[ 3490], 50.00th=[ 4817], 60.00th=[ 5604], 00:18:45.646 | 70.00th=[ 6652], 80.00th=[10814], 90.00th=[17433], 95.00th=[66847], 00:18:45.646 | 99.00th=[84411], 99.50th=[90702], 99.90th=[94897], 99.95th=[95945], 00:18:45.646 | 99.99th=[96994] 00:18:45.646 bw ( KiB/s): min= 5264, max=42168, per=100.00%, avg=24962.71, stdev=11605.92, samples=21 00:18:45.646 iops : min= 1316, max=10542, avg=6240.67, stdev=2901.47, samples=21 00:18:45.646 lat (usec) : 500=0.02%, 750=0.37%, 1000=2.08% 00:18:45.646 lat (msec) : 2=10.11%, 4=9.92%, 10=17.24%, 20=7.14%, 50=47.29% 00:18:45.646 lat (msec) : 100=4.61%, 250=1.19%, 500=0.03% 00:18:45.646 cpu : usr=99.22%, sys=0.12%, ctx=64, majf=0, minf=5585 00:18:45.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:45.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.646 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:45.646 issued rwts: total=65251,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.646 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:45.646 00:18:45.646 Run status group 0 (all jobs): 00:18:45.646 READ: bw=21.6MiB/s (22.7MB/s), 10.8MiB/s-10.9MiB/s (11.4MB/s-11.4MB/s), io=510MiB (535MB), run=23407-23584msec 00:18:45.646 WRITE: bw=22.6MiB/s (23.7MB/s), 11.3MiB/s-14.3MiB/s (11.9MB/s-15.0MB/s), io=512MiB (537MB), run=17850-22632msec 00:18:46.218 ----------------------------------------------------- 00:18:46.218 Suppressions used: 00:18:46.218 count bytes template 00:18:46.218 2 10 /usr/src/fio/parse.c 00:18:46.218 4 384 /usr/src/fio/iolog.c 00:18:46.218 1 8 libtcmalloc_minimal.so 00:18:46.218 1 904 libcrypto.so 00:18:46.218 ----------------------------------------------------- 00:18:46.218 00:18:46.218 17:35:19 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:46.218 17:35:19 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:46.218 17:35:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:46.218 17:35:19 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:46.218 17:35:19 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:46.218 17:35:19 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:46.218 17:35:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:46.218 17:35:19 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:46.218 17:35:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:46.218 17:35:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:46.218 17:35:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:18:46.218 17:35:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:46.218 17:35:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:46.218 17:35:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:46.218 17:35:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:46.218 17:35:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:46.218 17:35:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:46.218 17:35:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:46.218 17:35:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:46.218 17:35:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:46.218 17:35:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:46.218 17:35:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:46.218 17:35:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:46.218 17:35:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:46.479 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:46.479 fio-3.35 00:18:46.479 Starting 1 thread 00:19:04.622 00:19:04.622 test: (groupid=0, jobs=1): err= 0: pid=75703: Sat Dec 7 17:35:35 2024 00:19:04.622 read: IOPS=7666, BW=29.9MiB/s (31.4MB/s)(255MiB/8505msec) 00:19:04.622 slat (nsec): min=3108, max=19005, avg=3635.79, stdev=725.14 00:19:04.622 clat (usec): min=837, max=39876, avg=16689.69, stdev=2386.95 00:19:04.622 lat (usec): min=841, max=39879, avg=16693.32, stdev=2387.08 00:19:04.622 clat percentiles (usec): 00:19:04.622 | 1.00th=[14091], 5.00th=[14615], 10.00th=[15008], 20.00th=[15270], 00:19:04.622 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16188], 60.00th=[16450], 00:19:04.622 | 70.00th=[16909], 80.00th=[17171], 90.00th=[18482], 95.00th=[21365], 00:19:04.622 | 99.00th=[27132], 99.50th=[29754], 99.90th=[35390], 99.95th=[36439], 00:19:04.622 | 99.99th=[38011] 00:19:04.622 write: IOPS=10.1k, BW=39.3MiB/s (41.2MB/s)(256MiB/6513msec); 0 zone resets 00:19:04.622 slat (usec): min=4, max=941, avg= 7.84, stdev= 8.48 00:19:04.622 clat (usec): min=493, max=76653, avg=12657.72, stdev=16498.80 00:19:04.622 lat (usec): min=500, max=76658, avg=12665.56, stdev=16499.13 00:19:04.622 clat percentiles (usec): 00:19:04.622 | 1.00th=[ 783], 5.00th=[ 1045], 10.00th=[ 1270], 20.00th=[ 1745], 00:19:04.622 | 30.00th=[ 2212], 40.00th=[ 3195], 50.00th=[ 6849], 60.00th=[ 8848], 00:19:04.622 | 70.00th=[11600], 80.00th=[15795], 90.00th=[39060], 95.00th=[57934], 00:19:04.622 | 99.00th=[64226], 99.50th=[65799], 99.90th=[68682], 99.95th=[69731], 00:19:04.622 | 99.99th=[76022] 00:19:04.622 bw ( KiB/s): min= 1016, max=69064, per=93.03%, avg=37445.14, stdev=16049.73, samples=14 00:19:04.623 iops : min= 254, max=17266, avg=9361.29, stdev=4012.43, samples=14 00:19:04.623 lat (usec) : 500=0.01%, 750=0.37%, 1000=1.73% 00:19:04.623 lat (msec) : 2=10.89%, 4=7.83%, 10=11.73%, 20=55.74%, 50=7.97% 00:19:04.623 lat (msec) : 100=3.75% 00:19:04.623 cpu : 
usr=99.05%, sys=0.21%, ctx=31, majf=0, minf=5565 00:19:04.623 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:04.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.623 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:04.623 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.623 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:04.623 00:19:04.623 Run status group 0 (all jobs): 00:19:04.623 READ: bw=29.9MiB/s (31.4MB/s), 29.9MiB/s-29.9MiB/s (31.4MB/s-31.4MB/s), io=255MiB (267MB), run=8505-8505msec 00:19:04.623 WRITE: bw=39.3MiB/s (41.2MB/s), 39.3MiB/s-39.3MiB/s (41.2MB/s-41.2MB/s), io=256MiB (268MB), run=6513-6513msec 00:19:04.623 ----------------------------------------------------- 00:19:04.623 Suppressions used: 00:19:04.623 count bytes template 00:19:04.623 1 5 /usr/src/fio/parse.c 00:19:04.623 2 192 /usr/src/fio/iolog.c 00:19:04.623 1 8 libtcmalloc_minimal.so 00:19:04.623 1 904 libcrypto.so 00:19:04.623 ----------------------------------------------------- 00:19:04.623 00:19:04.623 17:35:37 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:04.623 17:35:37 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:04.623 17:35:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:04.623 17:35:37 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:04.623 Remove shared memory files 00:19:04.623 17:35:37 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:04.623 17:35:37 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:04.623 17:35:37 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:04.623 17:35:37 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:04.623 17:35:37 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57137 /dev/shm/spdk_tgt_trace.pid74027 00:19:04.623 17:35:37 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:04.623 17:35:37 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:04.623 00:19:04.623 real 1m5.097s 00:19:04.623 user 2m21.956s 00:19:04.623 sys 0m2.809s 00:19:04.623 17:35:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.623 17:35:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:04.623 ************************************ 00:19:04.623 END TEST ftl_fio_basic 00:19:04.623 ************************************ 00:19:04.623 17:35:37 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:04.623 17:35:37 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:04.623 17:35:37 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:04.623 17:35:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:04.623 ************************************ 00:19:04.623 START TEST ftl_bdevperf 00:19:04.623 ************************************ 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:04.623 * Looking for test storage... 
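
Each of the three fio runs above (randw-verify, randw-verify-j2, randw-verify-depth128) goes through the same traced preload dance before fio starts: ldd the SPDK fio plugin, pick out the sanitizer runtime it links against, and put that runtime ahead of the plugin in LD_PRELOAD. A minimal sketch of that pattern, reconstructed from the xtrace lines above (the wrapper name and exact structure are assumptions, not SPDK's verbatim autotest_common.sh source):

    # Reconstructed from the fio_bdev trace above; paths match this
    # particular run and are illustrative, not fixed.
    run_fio_with_plugin() {
        local fio_dir=/usr/src/fio
        local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
        local sanitizers=('libasan' 'libclang_rt.asan')
        local sanitizer asan_lib=

        for sanitizer in "${sanitizers[@]}"; do
            # Ask the dynamic linker which sanitizer runtime the plugin pulls in
            # (column 3 of ldd output is the resolved library path).
            asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
            [[ -n $asan_lib ]] && break
        done

        # The sanitizer runtime has to come before the plugin in LD_PRELOAD,
        # otherwise ASan's interceptors are not installed early enough.
        LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"
    }
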
00:19:04.623 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:04.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.623 --rc genhtml_branch_coverage=1 00:19:04.623 --rc genhtml_function_coverage=1 00:19:04.623 --rc genhtml_legend=1 00:19:04.623 --rc geninfo_all_blocks=1 00:19:04.623 --rc geninfo_unexecuted_blocks=1 00:19:04.623 00:19:04.623 ' 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:04.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.623 --rc genhtml_branch_coverage=1 00:19:04.623 
--rc genhtml_function_coverage=1 00:19:04.623 --rc genhtml_legend=1 00:19:04.623 --rc geninfo_all_blocks=1 00:19:04.623 --rc geninfo_unexecuted_blocks=1 00:19:04.623 00:19:04.623 ' 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:04.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.623 --rc genhtml_branch_coverage=1 00:19:04.623 --rc genhtml_function_coverage=1 00:19:04.623 --rc genhtml_legend=1 00:19:04.623 --rc geninfo_all_blocks=1 00:19:04.623 --rc geninfo_unexecuted_blocks=1 00:19:04.623 00:19:04.623 ' 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:04.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.623 --rc genhtml_branch_coverage=1 00:19:04.623 --rc genhtml_function_coverage=1 00:19:04.623 --rc genhtml_legend=1 00:19:04.623 --rc geninfo_all_blocks=1 00:19:04.623 --rc geninfo_unexecuted_blocks=1 00:19:04.623 00:19:04.623 ' 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:04.623 17:35:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:04.624 17:35:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:04.624 17:35:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:04.624 17:35:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:04.624 17:35:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:04.624 17:35:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75958 00:19:04.624 17:35:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:04.624 17:35:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75958 00:19:04.624 17:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 75958 ']' 00:19:04.624 17:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.624 17:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.624 17:35:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:04.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.624 17:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.624 17:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.624 17:35:37 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:04.624 [2024-12-07 17:35:37.795645] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
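
The scripts/common.sh trace a little further up (the lcov version check) is a pure-bash version comparison: both version strings are split on '.', '-' and ':' into arrays (IFS=.-: / read -ra) and compared field by field, most significant first. A standalone sketch of the same idea, assuming numeric fields only (the real cmp_versions also normalizes components through its decimal helper):

    # Field-by-field comparison in the style of the cmp_versions trace
    # above; numeric fields only.  version_lt 1.15 2 -> exit status 0.
    version_lt() {
        local IFS='.-:'
        local -a ver1=($1) ver2=($2)
        local i n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))

        for (( i = 0; i < n; i++ )); do
            local a=${ver1[i]:-0} b=${ver2[i]:-0}   # pad missing fields with 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
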
00:19:04.624 [2024-12-07 17:35:37.796550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75958 ] 00:19:04.624 [2024-12-07 17:35:37.968429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.886 [2024-12-07 17:35:38.079380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.461 17:35:38 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.461 17:35:38 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:05.461 17:35:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:05.461 17:35:38 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:05.461 17:35:38 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:05.461 17:35:38 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:05.461 17:35:38 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:05.461 17:35:38 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:05.723 17:35:38 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:05.723 17:35:38 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:05.723 17:35:38 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:05.723 17:35:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:05.723 17:35:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:05.723 17:35:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:05.723 17:35:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:05.723 17:35:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:05.984 17:35:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:05.984 { 00:19:05.984 "name": "nvme0n1", 00:19:05.984 "aliases": [ 00:19:05.984 "3eb5b784-da3c-44c5-a786-0d38731272f7" 00:19:05.984 ], 00:19:05.984 "product_name": "NVMe disk", 00:19:05.984 "block_size": 4096, 00:19:05.984 "num_blocks": 1310720, 00:19:05.984 "uuid": "3eb5b784-da3c-44c5-a786-0d38731272f7", 00:19:05.984 "numa_id": -1, 00:19:05.984 "assigned_rate_limits": { 00:19:05.984 "rw_ios_per_sec": 0, 00:19:05.984 "rw_mbytes_per_sec": 0, 00:19:05.984 "r_mbytes_per_sec": 0, 00:19:05.984 "w_mbytes_per_sec": 0 00:19:05.984 }, 00:19:05.984 "claimed": true, 00:19:05.984 "claim_type": "read_many_write_one", 00:19:05.984 "zoned": false, 00:19:05.984 "supported_io_types": { 00:19:05.984 "read": true, 00:19:05.984 "write": true, 00:19:05.984 "unmap": true, 00:19:05.984 "flush": true, 00:19:05.984 "reset": true, 00:19:05.984 "nvme_admin": true, 00:19:05.984 "nvme_io": true, 00:19:05.984 "nvme_io_md": false, 00:19:05.984 "write_zeroes": true, 00:19:05.984 "zcopy": false, 00:19:05.984 "get_zone_info": false, 00:19:05.984 "zone_management": false, 00:19:05.984 "zone_append": false, 00:19:05.984 "compare": true, 00:19:05.984 "compare_and_write": false, 00:19:05.984 "abort": true, 00:19:05.984 "seek_hole": false, 00:19:05.984 "seek_data": false, 00:19:05.984 "copy": true, 00:19:05.984 "nvme_iov_md": false 00:19:05.984 }, 00:19:05.984 "driver_specific": { 00:19:05.984 
"nvme": [ 00:19:05.984 { 00:19:05.984 "pci_address": "0000:00:11.0", 00:19:05.984 "trid": { 00:19:05.984 "trtype": "PCIe", 00:19:05.984 "traddr": "0000:00:11.0" 00:19:05.984 }, 00:19:05.984 "ctrlr_data": { 00:19:05.984 "cntlid": 0, 00:19:05.984 "vendor_id": "0x1b36", 00:19:05.984 "model_number": "QEMU NVMe Ctrl", 00:19:05.984 "serial_number": "12341", 00:19:05.984 "firmware_revision": "8.0.0", 00:19:05.984 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:05.984 "oacs": { 00:19:05.984 "security": 0, 00:19:05.984 "format": 1, 00:19:05.985 "firmware": 0, 00:19:05.985 "ns_manage": 1 00:19:05.985 }, 00:19:05.985 "multi_ctrlr": false, 00:19:05.985 "ana_reporting": false 00:19:05.985 }, 00:19:05.985 "vs": { 00:19:05.985 "nvme_version": "1.4" 00:19:05.985 }, 00:19:05.985 "ns_data": { 00:19:05.985 "id": 1, 00:19:05.985 "can_share": false 00:19:05.985 } 00:19:05.985 } 00:19:05.985 ], 00:19:05.985 "mp_policy": "active_passive" 00:19:05.985 } 00:19:05.985 } 00:19:05.985 ]' 00:19:05.985 17:35:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:05.985 17:35:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:05.985 17:35:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:05.985 17:35:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:05.985 17:35:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:05.985 17:35:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:05.985 17:35:39 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:05.985 17:35:39 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:05.985 17:35:39 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:05.985 17:35:39 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:05.985 17:35:39 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:06.245 17:35:39 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=2173a849-e7cb-45ef-92f4-b69e97070254 00:19:06.245 17:35:39 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:06.245 17:35:39 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2173a849-e7cb-45ef-92f4-b69e97070254 00:19:06.507 17:35:39 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:06.507 17:35:39 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=19a3c42e-94ff-47f6-b31e-5a916da8f69c 00:19:06.507 17:35:39 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 19a3c42e-94ff-47f6-b31e-5a916da8f69c 00:19:06.768 17:35:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=2b406f57-8208-4bba-87aa-e7ab5b0739cb 00:19:06.768 17:35:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2b406f57-8208-4bba-87aa-e7ab5b0739cb 00:19:06.768 17:35:40 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:06.768 17:35:40 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:06.768 17:35:40 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=2b406f57-8208-4bba-87aa-e7ab5b0739cb 00:19:06.768 17:35:40 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:06.768 17:35:40 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 2b406f57-8208-4bba-87aa-e7ab5b0739cb 00:19:06.768 17:35:40 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=2b406f57-8208-4bba-87aa-e7ab5b0739cb 00:19:06.768 17:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:06.768 17:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:06.768 17:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:06.768 17:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2b406f57-8208-4bba-87aa-e7ab5b0739cb 00:19:07.030 17:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:07.030 { 00:19:07.030 "name": "2b406f57-8208-4bba-87aa-e7ab5b0739cb", 00:19:07.030 "aliases": [ 00:19:07.030 "lvs/nvme0n1p0" 00:19:07.030 ], 00:19:07.030 "product_name": "Logical Volume", 00:19:07.030 "block_size": 4096, 00:19:07.030 "num_blocks": 26476544, 00:19:07.030 "uuid": "2b406f57-8208-4bba-87aa-e7ab5b0739cb", 00:19:07.030 "assigned_rate_limits": { 00:19:07.030 "rw_ios_per_sec": 0, 00:19:07.030 "rw_mbytes_per_sec": 0, 00:19:07.030 "r_mbytes_per_sec": 0, 00:19:07.030 "w_mbytes_per_sec": 0 00:19:07.030 }, 00:19:07.030 "claimed": false, 00:19:07.030 "zoned": false, 00:19:07.030 "supported_io_types": { 00:19:07.030 "read": true, 00:19:07.030 "write": true, 00:19:07.030 "unmap": true, 00:19:07.030 "flush": false, 00:19:07.030 "reset": true, 00:19:07.030 "nvme_admin": false, 00:19:07.030 "nvme_io": false, 00:19:07.030 "nvme_io_md": false, 00:19:07.030 "write_zeroes": true, 00:19:07.030 "zcopy": false, 00:19:07.030 "get_zone_info": false, 00:19:07.030 "zone_management": false, 00:19:07.030 "zone_append": false, 00:19:07.030 "compare": false, 00:19:07.030 "compare_and_write": false, 00:19:07.030 "abort": false, 00:19:07.030 "seek_hole": true, 00:19:07.030 "seek_data": true, 00:19:07.030 "copy": false, 00:19:07.030 "nvme_iov_md": false 00:19:07.030 }, 00:19:07.030 "driver_specific": { 00:19:07.030 "lvol": { 00:19:07.030 "lvol_store_uuid": "19a3c42e-94ff-47f6-b31e-5a916da8f69c", 00:19:07.030 "base_bdev": "nvme0n1", 00:19:07.030 "thin_provision": true, 00:19:07.030 "num_allocated_clusters": 0, 00:19:07.030 "snapshot": false, 00:19:07.030 "clone": false, 00:19:07.030 "esnap_clone": false 00:19:07.030 } 00:19:07.030 } 00:19:07.030 } 00:19:07.030 ]' 00:19:07.030 17:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:07.030 17:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:07.030 17:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:07.030 17:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:07.030 17:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:07.030 17:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:07.030 17:35:40 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:07.030 17:35:40 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:07.030 17:35:40 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:07.291 17:35:40 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:07.291 17:35:40 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:07.291 17:35:40 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 2b406f57-8208-4bba-87aa-e7ab5b0739cb 00:19:07.291 17:35:40 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=2b406f57-8208-4bba-87aa-e7ab5b0739cb 00:19:07.291 17:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:07.291 17:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:07.291 17:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:07.291 17:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2b406f57-8208-4bba-87aa-e7ab5b0739cb 00:19:07.553 17:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:07.553 { 00:19:07.553 "name": "2b406f57-8208-4bba-87aa-e7ab5b0739cb", 00:19:07.553 "aliases": [ 00:19:07.553 "lvs/nvme0n1p0" 00:19:07.553 ], 00:19:07.553 "product_name": "Logical Volume", 00:19:07.553 "block_size": 4096, 00:19:07.553 "num_blocks": 26476544, 00:19:07.553 "uuid": "2b406f57-8208-4bba-87aa-e7ab5b0739cb", 00:19:07.553 "assigned_rate_limits": { 00:19:07.553 "rw_ios_per_sec": 0, 00:19:07.553 "rw_mbytes_per_sec": 0, 00:19:07.553 "r_mbytes_per_sec": 0, 00:19:07.553 "w_mbytes_per_sec": 0 00:19:07.553 }, 00:19:07.553 "claimed": false, 00:19:07.553 "zoned": false, 00:19:07.553 "supported_io_types": { 00:19:07.553 "read": true, 00:19:07.553 "write": true, 00:19:07.553 "unmap": true, 00:19:07.553 "flush": false, 00:19:07.553 "reset": true, 00:19:07.553 "nvme_admin": false, 00:19:07.553 "nvme_io": false, 00:19:07.553 "nvme_io_md": false, 00:19:07.553 "write_zeroes": true, 00:19:07.553 "zcopy": false, 00:19:07.553 "get_zone_info": false, 00:19:07.553 "zone_management": false, 00:19:07.553 "zone_append": false, 00:19:07.553 "compare": false, 00:19:07.553 "compare_and_write": false, 00:19:07.553 "abort": false, 00:19:07.553 "seek_hole": true, 00:19:07.553 "seek_data": true, 00:19:07.553 "copy": false, 00:19:07.553 "nvme_iov_md": false 00:19:07.553 }, 00:19:07.553 "driver_specific": { 00:19:07.553 "lvol": { 00:19:07.553 "lvol_store_uuid": "19a3c42e-94ff-47f6-b31e-5a916da8f69c", 00:19:07.553 "base_bdev": "nvme0n1", 00:19:07.553 "thin_provision": true, 00:19:07.553 "num_allocated_clusters": 0, 00:19:07.553 "snapshot": false, 00:19:07.553 "clone": false, 00:19:07.553 "esnap_clone": false 00:19:07.553 } 00:19:07.553 } 00:19:07.553 } 00:19:07.553 ]' 00:19:07.553 17:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:07.553 17:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:07.553 17:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:07.553 17:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:07.553 17:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:07.553 17:35:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:07.553 17:35:40 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:07.553 17:35:40 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:07.815 17:35:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:07.815 17:35:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 2b406f57-8208-4bba-87aa-e7ab5b0739cb 00:19:07.815 17:35:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=2b406f57-8208-4bba-87aa-e7ab5b0739cb 00:19:07.815 17:35:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:07.815 17:35:41 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:07.815 17:35:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:07.815 17:35:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2b406f57-8208-4bba-87aa-e7ab5b0739cb 00:19:08.077 17:35:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:08.077 { 00:19:08.077 "name": "2b406f57-8208-4bba-87aa-e7ab5b0739cb", 00:19:08.077 "aliases": [ 00:19:08.077 "lvs/nvme0n1p0" 00:19:08.077 ], 00:19:08.077 "product_name": "Logical Volume", 00:19:08.077 "block_size": 4096, 00:19:08.077 "num_blocks": 26476544, 00:19:08.077 "uuid": "2b406f57-8208-4bba-87aa-e7ab5b0739cb", 00:19:08.077 "assigned_rate_limits": { 00:19:08.077 "rw_ios_per_sec": 0, 00:19:08.077 "rw_mbytes_per_sec": 0, 00:19:08.077 "r_mbytes_per_sec": 0, 00:19:08.077 "w_mbytes_per_sec": 0 00:19:08.077 }, 00:19:08.077 "claimed": false, 00:19:08.077 "zoned": false, 00:19:08.077 "supported_io_types": { 00:19:08.077 "read": true, 00:19:08.077 "write": true, 00:19:08.077 "unmap": true, 00:19:08.077 "flush": false, 00:19:08.077 "reset": true, 00:19:08.077 "nvme_admin": false, 00:19:08.077 "nvme_io": false, 00:19:08.077 "nvme_io_md": false, 00:19:08.077 "write_zeroes": true, 00:19:08.077 "zcopy": false, 00:19:08.077 "get_zone_info": false, 00:19:08.077 "zone_management": false, 00:19:08.077 "zone_append": false, 00:19:08.077 "compare": false, 00:19:08.077 "compare_and_write": false, 00:19:08.077 "abort": false, 00:19:08.077 "seek_hole": true, 00:19:08.077 "seek_data": true, 00:19:08.077 "copy": false, 00:19:08.077 "nvme_iov_md": false 00:19:08.077 }, 00:19:08.077 "driver_specific": { 00:19:08.077 "lvol": { 00:19:08.077 "lvol_store_uuid": "19a3c42e-94ff-47f6-b31e-5a916da8f69c", 00:19:08.077 "base_bdev": "nvme0n1", 00:19:08.077 "thin_provision": true, 00:19:08.077 "num_allocated_clusters": 0, 00:19:08.077 "snapshot": false, 00:19:08.077 "clone": false, 00:19:08.077 "esnap_clone": false 00:19:08.077 } 00:19:08.077 } 00:19:08.077 } 00:19:08.077 ]' 00:19:08.077 17:35:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:08.077 17:35:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:08.077 17:35:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:08.077 17:35:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:08.077 17:35:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:08.077 17:35:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:08.077 17:35:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:08.077 17:35:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2b406f57-8208-4bba-87aa-e7ab5b0739cb -c nvc0n1p0 --l2p_dram_limit 20 00:19:08.338 [2024-12-07 17:35:41.580740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.338 [2024-12-07 17:35:41.580786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:08.338 [2024-12-07 17:35:41.580799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:08.338 [2024-12-07 17:35:41.580808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.338 [2024-12-07 17:35:41.580855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.338 [2024-12-07 17:35:41.580864] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:08.338 [2024-12-07 17:35:41.580870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:08.338 [2024-12-07 17:35:41.580878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.338 [2024-12-07 17:35:41.580891] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:08.338 [2024-12-07 17:35:41.581517] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:08.338 [2024-12-07 17:35:41.581539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.338 [2024-12-07 17:35:41.581547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:08.338 [2024-12-07 17:35:41.581554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:19:08.338 [2024-12-07 17:35:41.581569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.338 [2024-12-07 17:35:41.581592] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a8af504f-2f4d-464f-bb9c-2303c84f02bb 00:19:08.338 [2024-12-07 17:35:41.582849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.338 [2024-12-07 17:35:41.582869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:08.338 [2024-12-07 17:35:41.582881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:08.338 [2024-12-07 17:35:41.582888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.338 [2024-12-07 17:35:41.589675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.338 [2024-12-07 17:35:41.589702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:08.338 [2024-12-07 17:35:41.589712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.746 ms 00:19:08.338 [2024-12-07 17:35:41.589721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.338 [2024-12-07 17:35:41.589826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.338 [2024-12-07 17:35:41.589835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:08.338 [2024-12-07 17:35:41.589847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:08.338 [2024-12-07 17:35:41.589853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.338 [2024-12-07 17:35:41.589888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.338 [2024-12-07 17:35:41.589895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:08.338 [2024-12-07 17:35:41.589903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:08.338 [2024-12-07 17:35:41.589909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.338 [2024-12-07 17:35:41.589928] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:08.338 [2024-12-07 17:35:41.593205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.338 [2024-12-07 17:35:41.593316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:08.338 [2024-12-07 17:35:41.593331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.285 ms 00:19:08.338 [2024-12-07 17:35:41.593343] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.338 [2024-12-07 17:35:41.593374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.338 [2024-12-07 17:35:41.593382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:08.338 [2024-12-07 17:35:41.593389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:08.338 [2024-12-07 17:35:41.593397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.338 [2024-12-07 17:35:41.593416] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:08.338 [2024-12-07 17:35:41.593535] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:08.338 [2024-12-07 17:35:41.593545] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:08.338 [2024-12-07 17:35:41.593555] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:08.338 [2024-12-07 17:35:41.593573] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:08.338 [2024-12-07 17:35:41.593582] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:08.338 [2024-12-07 17:35:41.593588] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:08.338 [2024-12-07 17:35:41.593596] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:08.338 [2024-12-07 17:35:41.593602] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:08.338 [2024-12-07 17:35:41.593610] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:08.338 [2024-12-07 17:35:41.593618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.338 [2024-12-07 17:35:41.593625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:08.338 [2024-12-07 17:35:41.593632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:19:08.338 [2024-12-07 17:35:41.593639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.339 [2024-12-07 17:35:41.593703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.339 [2024-12-07 17:35:41.593711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:08.339 [2024-12-07 17:35:41.593717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:08.339 [2024-12-07 17:35:41.593725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.339 [2024-12-07 17:35:41.593794] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:08.339 [2024-12-07 17:35:41.593805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:08.339 [2024-12-07 17:35:41.593812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:08.339 [2024-12-07 17:35:41.593819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.339 [2024-12-07 17:35:41.593825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:08.339 [2024-12-07 17:35:41.593832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:08.339 [2024-12-07 17:35:41.593836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:08.339 
[2024-12-07 17:35:41.593844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:08.339 [2024-12-07 17:35:41.593849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:08.339 [2024-12-07 17:35:41.593855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:08.339 [2024-12-07 17:35:41.593860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:08.339 [2024-12-07 17:35:41.593874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:08.339 [2024-12-07 17:35:41.593880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:08.339 [2024-12-07 17:35:41.593889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:08.339 [2024-12-07 17:35:41.593895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:08.339 [2024-12-07 17:35:41.593902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.339 [2024-12-07 17:35:41.593908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:08.339 [2024-12-07 17:35:41.593914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:08.339 [2024-12-07 17:35:41.593919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.339 [2024-12-07 17:35:41.593926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:08.339 [2024-12-07 17:35:41.593931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:08.339 [2024-12-07 17:35:41.593938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:08.339 [2024-12-07 17:35:41.593943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:08.339 [2024-12-07 17:35:41.593949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:08.339 [2024-12-07 17:35:41.593954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:08.339 [2024-12-07 17:35:41.593961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:08.339 [2024-12-07 17:35:41.593966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:08.339 [2024-12-07 17:35:41.593972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:08.339 [2024-12-07 17:35:41.593977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:08.339 [2024-12-07 17:35:41.593993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:08.339 [2024-12-07 17:35:41.593998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:08.339 [2024-12-07 17:35:41.594006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:08.339 [2024-12-07 17:35:41.594011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:08.339 [2024-12-07 17:35:41.594017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:08.339 [2024-12-07 17:35:41.594023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:08.339 [2024-12-07 17:35:41.594029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:08.339 [2024-12-07 17:35:41.594034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:08.339 [2024-12-07 17:35:41.594042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:08.339 [2024-12-07 17:35:41.594047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:08.339 [2024-12-07 17:35:41.594054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.339 [2024-12-07 17:35:41.594059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:08.339 [2024-12-07 17:35:41.594065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:08.339 [2024-12-07 17:35:41.594070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.339 [2024-12-07 17:35:41.594077] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:08.339 [2024-12-07 17:35:41.594083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:08.339 [2024-12-07 17:35:41.594091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:08.339 [2024-12-07 17:35:41.594097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.339 [2024-12-07 17:35:41.594107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:08.339 [2024-12-07 17:35:41.594113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:08.339 [2024-12-07 17:35:41.594119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:08.339 [2024-12-07 17:35:41.594125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:08.339 [2024-12-07 17:35:41.594131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:08.339 [2024-12-07 17:35:41.594136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:08.339 [2024-12-07 17:35:41.594145] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:08.339 [2024-12-07 17:35:41.594152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:08.339 [2024-12-07 17:35:41.594160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:08.339 [2024-12-07 17:35:41.594165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:08.339 [2024-12-07 17:35:41.594172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:08.339 [2024-12-07 17:35:41.594178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:08.339 [2024-12-07 17:35:41.594184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:08.339 [2024-12-07 17:35:41.594190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:08.339 [2024-12-07 17:35:41.594197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:08.339 [2024-12-07 17:35:41.594202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:08.339 [2024-12-07 17:35:41.594211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:08.339 [2024-12-07 17:35:41.594217] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:08.339 [2024-12-07 17:35:41.594223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:08.339 [2024-12-07 17:35:41.594228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:08.339 [2024-12-07 17:35:41.594236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:08.339 [2024-12-07 17:35:41.594241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:08.339 [2024-12-07 17:35:41.594248] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:08.339 [2024-12-07 17:35:41.594254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:08.339 [2024-12-07 17:35:41.594263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:08.339 [2024-12-07 17:35:41.594269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:08.339 [2024-12-07 17:35:41.594277] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:08.339 [2024-12-07 17:35:41.594283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:08.339 [2024-12-07 17:35:41.594290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.339 [2024-12-07 17:35:41.594295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:08.339 [2024-12-07 17:35:41.594303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:19:08.339 [2024-12-07 17:35:41.594309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.339 [2024-12-07 17:35:41.594348] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
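
The repeated bdev_get_bdevs / jq pairs earlier in this test are the traced body of get_bdev_size: query the bdev over RPC, pull block_size and num_blocks out of the JSON, and convert the product to MiB (4096 B x 1310720 blocks = 5120 MiB for the NVMe namespace, 4096 B x 26476544 blocks = 103424 MiB for the lvol, matching the echoed values above). A sketch of that helper as it appears in the trace, with the rpc.py path taken from this run:

    # Reconstruction of the get_bdev_size trace above:
    # size in MiB = block_size * num_blocks / (1024 * 1024).
    get_bdev_size() {
        local rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        local bdev_name=$1
        local bdev_info bs nb

        bdev_info=$("$rpc_py" bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
        echo $(( bs * nb / 1024 / 1024 ))
    }
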
00:19:08.339 [2024-12-07 17:35:41.594356] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:12.543 [2024-12-07 17:35:45.335191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.543 [2024-12-07 17:35:45.335586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:12.543 [2024-12-07 17:35:45.335623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3740.818 ms 00:19:12.543 [2024-12-07 17:35:45.335634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.543 [2024-12-07 17:35:45.373309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.543 [2024-12-07 17:35:45.373373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:12.543 [2024-12-07 17:35:45.373391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.405 ms 00:19:12.543 [2024-12-07 17:35:45.373400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.543 [2024-12-07 17:35:45.373584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.543 [2024-12-07 17:35:45.373598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:12.543 [2024-12-07 17:35:45.373614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:19:12.543 [2024-12-07 17:35:45.373624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.543 [2024-12-07 17:35:45.425433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.543 [2024-12-07 17:35:45.425495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:12.543 [2024-12-07 17:35:45.425513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.766 ms 00:19:12.543 [2024-12-07 17:35:45.425523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.543 [2024-12-07 17:35:45.425599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.543 [2024-12-07 17:35:45.425611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:12.543 [2024-12-07 17:35:45.425624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:12.543 [2024-12-07 17:35:45.425636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.543 [2024-12-07 17:35:45.426410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.543 [2024-12-07 17:35:45.426451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:12.543 [2024-12-07 17:35:45.426466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:19:12.543 [2024-12-07 17:35:45.426475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.543 [2024-12-07 17:35:45.426607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.543 [2024-12-07 17:35:45.426619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:12.543 [2024-12-07 17:35:45.426635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:19:12.543 [2024-12-07 17:35:45.426644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.543 [2024-12-07 17:35:45.444948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.543 [2024-12-07 17:35:45.445273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:12.543 [2024-12-07 
17:35:45.445300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.280 ms 00:19:12.543 [2024-12-07 17:35:45.445320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.543 [2024-12-07 17:35:45.460452] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:12.543 [2024-12-07 17:35:45.469945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.543 [2024-12-07 17:35:45.470050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:12.543 [2024-12-07 17:35:45.470065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.519 ms 00:19:12.543 [2024-12-07 17:35:45.470076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.543 [2024-12-07 17:35:45.559304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.543 [2024-12-07 17:35:45.559517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:12.543 [2024-12-07 17:35:45.559541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.196 ms 00:19:12.543 [2024-12-07 17:35:45.559555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.543 [2024-12-07 17:35:45.559819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.543 [2024-12-07 17:35:45.559841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:12.543 [2024-12-07 17:35:45.559852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:19:12.543 [2024-12-07 17:35:45.559868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.544 [2024-12-07 17:35:45.585903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.544 [2024-12-07 17:35:45.585962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:12.544 [2024-12-07 17:35:45.585977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.982 ms 00:19:12.544 [2024-12-07 17:35:45.586009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.544 [2024-12-07 17:35:45.611032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.544 [2024-12-07 17:35:45.611086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:12.544 [2024-12-07 17:35:45.611101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.974 ms 00:19:12.544 [2024-12-07 17:35:45.611113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.544 [2024-12-07 17:35:45.611739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.544 [2024-12-07 17:35:45.611764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:12.544 [2024-12-07 17:35:45.611774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:19:12.544 [2024-12-07 17:35:45.611785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.544 [2024-12-07 17:35:45.697958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.544 [2024-12-07 17:35:45.698027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:12.544 [2024-12-07 17:35:45.698042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.134 ms 00:19:12.544 [2024-12-07 17:35:45.698054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.544 [2024-12-07 
17:35:45.726880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.544 [2024-12-07 17:35:45.726936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:12.544 [2024-12-07 17:35:45.726953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.737 ms 00:19:12.544 [2024-12-07 17:35:45.726964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.544 [2024-12-07 17:35:45.752556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.544 [2024-12-07 17:35:45.752608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:12.544 [2024-12-07 17:35:45.752621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.530 ms 00:19:12.544 [2024-12-07 17:35:45.752632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.544 [2024-12-07 17:35:45.779304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.544 [2024-12-07 17:35:45.779358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:12.544 [2024-12-07 17:35:45.779371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.627 ms 00:19:12.544 [2024-12-07 17:35:45.779382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.544 [2024-12-07 17:35:45.779433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.544 [2024-12-07 17:35:45.779451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:12.544 [2024-12-07 17:35:45.779462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:12.544 [2024-12-07 17:35:45.779474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.544 [2024-12-07 17:35:45.779571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.544 [2024-12-07 17:35:45.779587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:12.544 [2024-12-07 17:35:45.779597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:12.544 [2024-12-07 17:35:45.779609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.544 [2024-12-07 17:35:45.781018] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4199.678 ms, result 0 00:19:12.544 { 00:19:12.544 "name": "ftl0", 00:19:12.544 "uuid": "a8af504f-2f4d-464f-bb9c-2303c84f02bb" 00:19:12.544 } 00:19:12.544 17:35:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:12.544 17:35:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:12.544 17:35:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:12.844 17:35:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:12.844 [2024-12-07 17:35:46.125009] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:12.844 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:12.844 Zero copy mechanism will not be used. 00:19:12.844 Running I/O for 4 seconds... 
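One detail worth calling out in this first job: the -o 69632 request size is 17 x 4 KiB = 68 KiB, one 4 KiB block over bdevperf's 65536-byte zero-copy threshold, which is why the notice above reports that the zero copy mechanism will not be used. Quick check:

    $ echo $(( 69632 / 4096 )) $(( 69632 - 65536 ))
    17 4096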
00:19:14.786 809.00 IOPS, 53.72 MiB/s [2024-12-07T17:35:49.544Z] 838.00 IOPS, 55.65 MiB/s [2024-12-07T17:35:50.482Z] 827.33 IOPS, 54.94 MiB/s [2024-12-07T17:35:50.482Z] 854.00 IOPS, 56.71 MiB/s 00:19:17.100 Latency(us) 00:19:17.100 [2024-12-07T17:35:50.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.100 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:17.100 ftl0 : 4.00 853.78 56.70 0.00 0.00 1233.68 214.25 7511.43 00:19:17.100 [2024-12-07T17:35:50.482Z] =================================================================================================================== 00:19:17.100 [2024-12-07T17:35:50.482Z] Total : 853.78 56.70 0.00 0.00 1233.68 214.25 7511.43 00:19:17.100 { 00:19:17.100 "results": [ 00:19:17.100 { 00:19:17.100 "job": "ftl0", 00:19:17.100 "core_mask": "0x1", 00:19:17.100 "workload": "randwrite", 00:19:17.100 "status": "finished", 00:19:17.100 "queue_depth": 1, 00:19:17.100 "io_size": 69632, 00:19:17.100 "runtime": 4.002192, 00:19:17.100 "iops": 853.782127394188, 00:19:17.100 "mibps": 56.6964693972703, 00:19:17.100 "io_failed": 0, 00:19:17.100 "io_timeout": 0, 00:19:17.100 "avg_latency_us": 1233.6796812318498, 00:19:17.100 "min_latency_us": 214.25230769230768, 00:19:17.100 "max_latency_us": 7511.433846153846 00:19:17.100 } 00:19:17.100 ], 00:19:17.100 "core_count": 1 00:19:17.100 } 00:19:17.100 [2024-12-07 17:35:50.136771] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:17.100 17:35:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:17.100 [2024-12-07 17:35:50.250020] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:17.100 Running I/O for 4 seconds... 
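The throughput figures bdevperf prints are internally consistent: mibps equals iops * io_size / 2^20. As a sketch (results.json is hypothetical; this log only prints the results object inline), jq, which bdevperf.sh already pipes through above, can reproduce the first job's MiB/s:

    $ jq '.results[0] | .iops * .io_size / 1048576' results.json
    # prints ~56.70, matching the "mibps" field in the block above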
00:19:18.985 5834.00 IOPS, 22.79 MiB/s [2024-12-07T17:35:53.311Z] 5262.00 IOPS, 20.55 MiB/s [2024-12-07T17:35:54.701Z] 4981.33 IOPS, 19.46 MiB/s [2024-12-07T17:35:54.701Z] 4867.00 IOPS, 19.01 MiB/s 00:19:21.319 Latency(us) 00:19:21.319 [2024-12-07T17:35:54.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.319 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:21.319 ftl0 : 4.03 4855.90 18.97 0.00 0.00 26255.05 360.76 48597.46 00:19:21.319 [2024-12-07T17:35:54.701Z] =================================================================================================================== 00:19:21.319 [2024-12-07T17:35:54.701Z] Total : 4855.90 18.97 0.00 0.00 26255.05 0.00 48597.46 00:19:21.319 { 00:19:21.319 "results": [ 00:19:21.319 { 00:19:21.319 "job": "ftl0", 00:19:21.319 "core_mask": "0x1", 00:19:21.319 "workload": "randwrite", 00:19:21.319 "status": "finished", 00:19:21.319 "queue_depth": 128, 00:19:21.319 "io_size": 4096, 00:19:21.319 "runtime": 4.03324, 00:19:21.319 "iops": 4855.897491842787, 00:19:21.319 "mibps": 18.968349577510885, 00:19:21.319 "io_failed": 0, 00:19:21.319 "io_timeout": 0, 00:19:21.319 "avg_latency_us": 26255.054696333533, 00:19:21.319 "min_latency_us": 360.76307692307694, 00:19:21.319 "max_latency_us": 48597.46461538462 00:19:21.319 } 00:19:21.319 ], 00:19:21.319 "core_count": 1 00:19:21.319 } 00:19:21.319 [2024-12-07 17:35:54.293669] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:21.319 17:35:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:21.319 [2024-12-07 17:35:54.406959] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:21.319 Running I/O for 4 seconds... 
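The verify job launched above rereads the written data and checks it over a fixed LBA window; the results below report 'Verification LBA range: start 0x0 length 0x1400000', i.e. 20971520 bytes, a 20 MiB window of 5120 4-KiB blocks (the same 20971520 appears as verify_range.length in the JSON). Quick conversion:

    $ printf '%d bytes = %d MiB = %d blocks\n' \
        $(( 0x1400000 )) $(( 0x1400000 / 1048576 )) $(( 0x1400000 / 4096 ))
    20971520 bytes = 20 MiB = 5120 blocks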
00:19:23.215 4353.00 IOPS, 17.00 MiB/s [2024-12-07T17:35:57.544Z] 4350.50 IOPS, 16.99 MiB/s [2024-12-07T17:35:58.490Z] 4324.00 IOPS, 16.89 MiB/s [2024-12-07T17:35:58.490Z] 4300.25 IOPS, 16.80 MiB/s 00:19:25.108 Latency(us) 00:19:25.108 [2024-12-07T17:35:58.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.108 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:25.108 Verification LBA range: start 0x0 length 0x1400000 00:19:25.108 ftl0 : 4.01 4318.50 16.87 0.00 0.00 29567.99 360.76 39119.95 00:19:25.108 [2024-12-07T17:35:58.490Z] =================================================================================================================== 00:19:25.108 [2024-12-07T17:35:58.490Z] Total : 4318.50 16.87 0.00 0.00 29567.99 0.00 39119.95 00:19:25.108 [2024-12-07 17:35:58.436225] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:25.108 { 00:19:25.108 "results": [ 00:19:25.108 { 00:19:25.108 "job": "ftl0", 00:19:25.108 "core_mask": "0x1", 00:19:25.108 "workload": "verify", 00:19:25.108 "status": "finished", 00:19:25.108 "verify_range": { 00:19:25.108 "start": 0, 00:19:25.108 "length": 20971520 00:19:25.108 }, 00:19:25.108 "queue_depth": 128, 00:19:25.108 "io_size": 4096, 00:19:25.108 "runtime": 4.012503, 00:19:25.108 "iops": 4318.50144411107, 00:19:25.108 "mibps": 16.869146266058866, 00:19:25.108 "io_failed": 0, 00:19:25.108 "io_timeout": 0, 00:19:25.108 "avg_latency_us": 29567.985362596773, 00:19:25.108 "min_latency_us": 360.76307692307694, 00:19:25.108 "max_latency_us": 39119.95076923077 00:19:25.108 } 00:19:25.108 ], 00:19:25.108 "core_count": 1 00:19:25.108 } 00:19:25.108 17:35:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:25.369 [2024-12-07 17:35:58.655268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.369 [2024-12-07 17:35:58.655492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:25.369 [2024-12-07 17:35:58.655515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:25.369 [2024-12-07 17:35:58.655527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.369 [2024-12-07 17:35:58.655557] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:25.369 [2024-12-07 17:35:58.658577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.369 [2024-12-07 17:35:58.658728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:25.369 [2024-12-07 17:35:58.658752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.997 ms 00:19:25.369 [2024-12-07 17:35:58.658761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.369 [2024-12-07 17:35:58.661517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.369 [2024-12-07 17:35:58.661556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:25.369 [2024-12-07 17:35:58.661600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.724 ms 00:19:25.369 [2024-12-07 17:35:58.661609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.631 [2024-12-07 17:35:58.900515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.631 [2024-12-07 17:35:58.900569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:19:25.631 [2024-12-07 17:35:58.900591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 238.879 ms 00:19:25.631 [2024-12-07 17:35:58.900600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.631 [2024-12-07 17:35:58.906820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.631 [2024-12-07 17:35:58.906860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:25.631 [2024-12-07 17:35:58.906874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.173 ms 00:19:25.631 [2024-12-07 17:35:58.906886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.631 [2024-12-07 17:35:58.933368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.631 [2024-12-07 17:35:58.933412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:25.631 [2024-12-07 17:35:58.933429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.418 ms 00:19:25.631 [2024-12-07 17:35:58.933437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.631 [2024-12-07 17:35:58.950193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.631 [2024-12-07 17:35:58.950372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:25.631 [2024-12-07 17:35:58.950398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.705 ms 00:19:25.631 [2024-12-07 17:35:58.950407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.631 [2024-12-07 17:35:58.950596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.631 [2024-12-07 17:35:58.950608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:25.631 [2024-12-07 17:35:58.950624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:19:25.631 [2024-12-07 17:35:58.950632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.631 [2024-12-07 17:35:58.976224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.631 [2024-12-07 17:35:58.976267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:25.631 [2024-12-07 17:35:58.976280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.570 ms 00:19:25.631 [2024-12-07 17:35:58.976288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.631 [2024-12-07 17:35:59.001404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.631 [2024-12-07 17:35:59.001583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:25.631 [2024-12-07 17:35:59.001608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.068 ms 00:19:25.631 [2024-12-07 17:35:59.001616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.894 [2024-12-07 17:35:59.025748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.894 [2024-12-07 17:35:59.025790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:25.894 [2024-12-07 17:35:59.025804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.055 ms 00:19:25.894 [2024-12-07 17:35:59.025811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.894 [2024-12-07 17:35:59.049834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.894 [2024-12-07 17:35:59.049876] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:25.894 [2024-12-07 17:35:59.049893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.935 ms 00:19:25.894 [2024-12-07 17:35:59.049900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.894 [2024-12-07 17:35:59.049953] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:25.894 [2024-12-07 17:35:59.049969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:25.894 [2024-12-07 17:35:59.050001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:25.894 [2024-12-07 17:35:59.050011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:25.894 [2024-12-07 17:35:59.050021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:25.894 [2024-12-07 17:35:59.050029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:25.894 [2024-12-07 17:35:59.050039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:25.894 [2024-12-07 17:35:59.050048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:25.894 [2024-12-07 17:35:59.050057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:25.894 [2024-12-07 17:35:59.050065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:25.894 [2024-12-07 17:35:59.050074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:25.894 [2024-12-07 17:35:59.050082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:25.894 [2024-12-07 17:35:59.050091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:25.894 [2024-12-07 17:35:59.050098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:25.894 [2024-12-07 17:35:59.050110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:25.895 [2024-12-07 17:35:59.050186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050850] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:25.895 [2024-12-07 17:35:59.050894] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:25.895 [2024-12-07 17:35:59.050904] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a8af504f-2f4d-464f-bb9c-2303c84f02bb 00:19:25.895 [2024-12-07 17:35:59.050915] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:25.895 [2024-12-07 17:35:59.050924] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:25.896 [2024-12-07 17:35:59.050932] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:25.896 [2024-12-07 17:35:59.050941] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:25.896 [2024-12-07 17:35:59.050949] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:25.896 [2024-12-07 17:35:59.050958] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:25.896 [2024-12-07 17:35:59.050966] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:25.896 [2024-12-07 17:35:59.051005] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:25.896 [2024-12-07 17:35:59.051012] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:25.896 [2024-12-07 17:35:59.051023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.896 [2024-12-07 17:35:59.051031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:25.896 [2024-12-07 17:35:59.051042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.081 ms 00:19:25.896 [2024-12-07 17:35:59.051050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.896 [2024-12-07 17:35:59.064859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.896 [2024-12-07 17:35:59.065042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:25.896 [2024-12-07 17:35:59.065067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.767 ms 00:19:25.896 [2024-12-07 17:35:59.065076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.896 [2024-12-07 17:35:59.065476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.896 [2024-12-07 17:35:59.065486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:25.896 [2024-12-07 17:35:59.065497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:19:25.896 [2024-12-07 17:35:59.065505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.896 [2024-12-07 17:35:59.104178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.896 [2024-12-07 17:35:59.104347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:25.896 [2024-12-07 17:35:59.104374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.896 [2024-12-07 17:35:59.104383] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:25.896 [2024-12-07 17:35:59.104450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.896 [2024-12-07 17:35:59.104459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:25.896 [2024-12-07 17:35:59.104469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.896 [2024-12-07 17:35:59.104477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.896 [2024-12-07 17:35:59.104580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.896 [2024-12-07 17:35:59.104591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:25.896 [2024-12-07 17:35:59.104602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.896 [2024-12-07 17:35:59.104610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.896 [2024-12-07 17:35:59.104628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.896 [2024-12-07 17:35:59.104637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:25.896 [2024-12-07 17:35:59.104646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.896 [2024-12-07 17:35:59.104654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.896 [2024-12-07 17:35:59.188151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.896 [2024-12-07 17:35:59.188208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:25.896 [2024-12-07 17:35:59.188226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.896 [2024-12-07 17:35:59.188234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.896 [2024-12-07 17:35:59.256037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.896 [2024-12-07 17:35:59.256089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:25.896 [2024-12-07 17:35:59.256104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.896 [2024-12-07 17:35:59.256113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.896 [2024-12-07 17:35:59.256199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.896 [2024-12-07 17:35:59.256210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:25.896 [2024-12-07 17:35:59.256220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.896 [2024-12-07 17:35:59.256229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.896 [2024-12-07 17:35:59.256293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.896 [2024-12-07 17:35:59.256305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:25.896 [2024-12-07 17:35:59.256315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.896 [2024-12-07 17:35:59.256323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.896 [2024-12-07 17:35:59.256423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.896 [2024-12-07 17:35:59.256435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:25.896 [2024-12-07 17:35:59.256449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:25.896 [2024-12-07 17:35:59.256457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.896 [2024-12-07 17:35:59.256492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.896 [2024-12-07 17:35:59.256502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:25.896 [2024-12-07 17:35:59.256512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.896 [2024-12-07 17:35:59.256520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.896 [2024-12-07 17:35:59.256562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.896 [2024-12-07 17:35:59.256573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:25.896 [2024-12-07 17:35:59.256583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.896 [2024-12-07 17:35:59.256599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.896 [2024-12-07 17:35:59.256648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.896 [2024-12-07 17:35:59.256659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:25.896 [2024-12-07 17:35:59.256670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.896 [2024-12-07 17:35:59.256678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.896 [2024-12-07 17:35:59.256818] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 601.506 ms, result 0 00:19:25.896 true 00:19:26.158 17:35:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75958 00:19:26.158 17:35:59 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 75958 ']' 00:19:26.158 17:35:59 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 75958 00:19:26.158 17:35:59 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:26.158 17:35:59 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:26.158 17:35:59 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75958 00:19:26.158 killing process with pid 75958 00:19:26.158 Received shutdown signal, test time was about 4.000000 seconds 00:19:26.158 00:19:26.158 Latency(us) 00:19:26.158 [2024-12-07T17:35:59.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.158 [2024-12-07T17:35:59.540Z] =================================================================================================================== 00:19:26.158 [2024-12-07T17:35:59.540Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:26.158 17:35:59 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:26.158 17:35:59 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:26.158 17:35:59 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75958' 00:19:26.158 17:35:59 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 75958 00:19:26.158 17:35:59 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 75958 00:19:27.101 Remove shared memory files 00:19:27.101 17:36:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:27.101 17:36:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:27.101 17:36:00 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:27.101 17:36:00 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:27.101 17:36:00 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:27.101 17:36:00 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:27.101 17:36:00 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:27.101 17:36:00 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:27.101 ************************************ 00:19:27.102 END TEST ftl_bdevperf 00:19:27.102 ************************************ 00:19:27.102 00:19:27.102 real 0m22.624s 00:19:27.102 user 0m25.134s 00:19:27.102 sys 0m1.042s 00:19:27.102 17:36:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:27.102 17:36:00 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:27.102 17:36:00 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:27.102 17:36:00 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:27.102 17:36:00 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:27.102 17:36:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:27.102 ************************************ 00:19:27.102 START TEST ftl_trim 00:19:27.102 ************************************ 00:19:27.102 17:36:00 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:27.102 * Looking for test storage... 00:19:27.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:27.102 17:36:00 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:27.102 17:36:00 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:19:27.102 17:36:00 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:27.102 17:36:00 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:27.102 17:36:00 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:27.102 17:36:00 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:27.102 17:36:00 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:27.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.102 --rc genhtml_branch_coverage=1 00:19:27.102 --rc genhtml_function_coverage=1 00:19:27.102 --rc genhtml_legend=1 00:19:27.102 --rc geninfo_all_blocks=1 00:19:27.102 --rc geninfo_unexecuted_blocks=1 00:19:27.102 00:19:27.102 ' 00:19:27.102 17:36:00 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:27.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.102 --rc genhtml_branch_coverage=1 00:19:27.102 --rc genhtml_function_coverage=1 00:19:27.102 --rc genhtml_legend=1 00:19:27.102 --rc geninfo_all_blocks=1 00:19:27.102 --rc geninfo_unexecuted_blocks=1 00:19:27.102 00:19:27.102 ' 00:19:27.102 17:36:00 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:27.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.102 --rc genhtml_branch_coverage=1 00:19:27.102 --rc genhtml_function_coverage=1 00:19:27.102 --rc genhtml_legend=1 00:19:27.102 --rc geninfo_all_blocks=1 00:19:27.102 --rc geninfo_unexecuted_blocks=1 00:19:27.102 00:19:27.102 ' 00:19:27.102 17:36:00 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:27.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.102 --rc genhtml_branch_coverage=1 00:19:27.102 --rc genhtml_function_coverage=1 00:19:27.102 --rc genhtml_legend=1 00:19:27.102 --rc geninfo_all_blocks=1 00:19:27.102 --rc geninfo_unexecuted_blocks=1 00:19:27.102 00:19:27.102 ' 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
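The xtrace just above is scripts/common.sh's cmp_versions deciding whether the installed lcov (1.15) predates version 2, which selects the branch/function-coverage LCOV_OPTS exported in the same stretch of the log; 1 < 2 at the first component, so the less-than branch returns success. Roughly, the traced loop amounts to this (a sketch, not the actual scripts/common.sh source):

    # split versions on ".-:" and compare component by component,
    # padding the shorter one with zeros (mirrors the trace above)
    IFS=.-: read -ra ver1 <<< "1.15"
    IFS=.-: read -ra ver2 <<< "2"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { echo '1.15 > 2'; break; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { echo '1.15 < 2'; break; }
    done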
00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:27.102 17:36:00 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76311 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76311 00:19:27.102 17:36:00 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76311 ']' 00:19:27.102 17:36:00 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:27.102 17:36:00 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.102 17:36:00 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.102 17:36:00 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.102 17:36:00 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.102 17:36:00 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:27.364 [2024-12-07 17:36:00.516288] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:19:27.364 [2024-12-07 17:36:00.516672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76311 ] 00:19:27.364 [2024-12-07 17:36:00.680741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:27.625 [2024-12-07 17:36:00.805948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.625 [2024-12-07 17:36:00.806266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.625 [2024-12-07 17:36:00.806376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.566 17:36:01 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.566 17:36:01 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:28.566 17:36:01 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:28.566 17:36:01 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:28.566 17:36:01 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:28.566 17:36:01 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:28.566 17:36:01 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:28.566 17:36:01 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:28.566 17:36:01 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:28.566 17:36:01 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:28.566 17:36:01 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:28.566 17:36:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:28.566 17:36:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:28.566 17:36:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:28.566 17:36:01 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:28.566 17:36:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:28.826 17:36:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:28.826 { 00:19:28.826 "name": "nvme0n1", 00:19:28.826 "aliases": [ 
00:19:28.826 "6be7d134-9a4d-4a47-af99-591b6b1ddd30" 00:19:28.826 ], 00:19:28.826 "product_name": "NVMe disk", 00:19:28.826 "block_size": 4096, 00:19:28.826 "num_blocks": 1310720, 00:19:28.826 "uuid": "6be7d134-9a4d-4a47-af99-591b6b1ddd30", 00:19:28.826 "numa_id": -1, 00:19:28.826 "assigned_rate_limits": { 00:19:28.826 "rw_ios_per_sec": 0, 00:19:28.826 "rw_mbytes_per_sec": 0, 00:19:28.826 "r_mbytes_per_sec": 0, 00:19:28.826 "w_mbytes_per_sec": 0 00:19:28.826 }, 00:19:28.826 "claimed": true, 00:19:28.826 "claim_type": "read_many_write_one", 00:19:28.826 "zoned": false, 00:19:28.826 "supported_io_types": { 00:19:28.826 "read": true, 00:19:28.826 "write": true, 00:19:28.826 "unmap": true, 00:19:28.826 "flush": true, 00:19:28.826 "reset": true, 00:19:28.826 "nvme_admin": true, 00:19:28.826 "nvme_io": true, 00:19:28.826 "nvme_io_md": false, 00:19:28.826 "write_zeroes": true, 00:19:28.826 "zcopy": false, 00:19:28.826 "get_zone_info": false, 00:19:28.826 "zone_management": false, 00:19:28.826 "zone_append": false, 00:19:28.826 "compare": true, 00:19:28.826 "compare_and_write": false, 00:19:28.826 "abort": true, 00:19:28.826 "seek_hole": false, 00:19:28.826 "seek_data": false, 00:19:28.826 "copy": true, 00:19:28.826 "nvme_iov_md": false 00:19:28.826 }, 00:19:28.826 "driver_specific": { 00:19:28.826 "nvme": [ 00:19:28.826 { 00:19:28.826 "pci_address": "0000:00:11.0", 00:19:28.826 "trid": { 00:19:28.826 "trtype": "PCIe", 00:19:28.826 "traddr": "0000:00:11.0" 00:19:28.826 }, 00:19:28.826 "ctrlr_data": { 00:19:28.826 "cntlid": 0, 00:19:28.826 "vendor_id": "0x1b36", 00:19:28.826 "model_number": "QEMU NVMe Ctrl", 00:19:28.826 "serial_number": "12341", 00:19:28.826 "firmware_revision": "8.0.0", 00:19:28.826 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:28.826 "oacs": { 00:19:28.826 "security": 0, 00:19:28.826 "format": 1, 00:19:28.826 "firmware": 0, 00:19:28.826 "ns_manage": 1 00:19:28.826 }, 00:19:28.826 "multi_ctrlr": false, 00:19:28.826 "ana_reporting": false 00:19:28.826 }, 00:19:28.826 "vs": { 00:19:28.826 "nvme_version": "1.4" 00:19:28.826 }, 00:19:28.826 "ns_data": { 00:19:28.826 "id": 1, 00:19:28.826 "can_share": false 00:19:28.826 } 00:19:28.826 } 00:19:28.826 ], 00:19:28.826 "mp_policy": "active_passive" 00:19:28.826 } 00:19:28.826 } 00:19:28.826 ]' 00:19:28.826 17:36:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:28.826 17:36:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:28.826 17:36:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:28.826 17:36:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:28.826 17:36:02 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:28.826 17:36:02 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:28.826 17:36:02 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:28.826 17:36:02 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:28.826 17:36:02 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:28.826 17:36:02 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:28.826 17:36:02 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:29.088 17:36:02 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=19a3c42e-94ff-47f6-b31e-5a916da8f69c 00:19:29.088 17:36:02 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:29.088 17:36:02 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 19a3c42e-94ff-47f6-b31e-5a916da8f69c 00:19:29.348 17:36:02 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:29.609 17:36:02 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=23449283-1523-4717-8c78-fe22e335e1a7 00:19:29.609 17:36:02 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 23449283-1523-4717-8c78-fe22e335e1a7 00:19:29.869 17:36:03 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=4ac9cfc9-8254-428d-a17b-3d9d1cc7357e 00:19:29.870 17:36:03 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4ac9cfc9-8254-428d-a17b-3d9d1cc7357e 00:19:29.870 17:36:03 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:29.870 17:36:03 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:29.870 17:36:03 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=4ac9cfc9-8254-428d-a17b-3d9d1cc7357e 00:19:29.870 17:36:03 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:29.870 17:36:03 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 4ac9cfc9-8254-428d-a17b-3d9d1cc7357e 00:19:29.870 17:36:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=4ac9cfc9-8254-428d-a17b-3d9d1cc7357e 00:19:29.870 17:36:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:29.870 17:36:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:29.870 17:36:03 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:29.870 17:36:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4ac9cfc9-8254-428d-a17b-3d9d1cc7357e 00:19:30.129 17:36:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:30.129 { 00:19:30.129 "name": "4ac9cfc9-8254-428d-a17b-3d9d1cc7357e", 00:19:30.129 "aliases": [ 00:19:30.129 "lvs/nvme0n1p0" 00:19:30.129 ], 00:19:30.129 "product_name": "Logical Volume", 00:19:30.129 "block_size": 4096, 00:19:30.129 "num_blocks": 26476544, 00:19:30.129 "uuid": "4ac9cfc9-8254-428d-a17b-3d9d1cc7357e", 00:19:30.129 "assigned_rate_limits": { 00:19:30.129 "rw_ios_per_sec": 0, 00:19:30.129 "rw_mbytes_per_sec": 0, 00:19:30.129 "r_mbytes_per_sec": 0, 00:19:30.129 "w_mbytes_per_sec": 0 00:19:30.129 }, 00:19:30.129 "claimed": false, 00:19:30.129 "zoned": false, 00:19:30.130 "supported_io_types": { 00:19:30.130 "read": true, 00:19:30.130 "write": true, 00:19:30.130 "unmap": true, 00:19:30.130 "flush": false, 00:19:30.130 "reset": true, 00:19:30.130 "nvme_admin": false, 00:19:30.130 "nvme_io": false, 00:19:30.130 "nvme_io_md": false, 00:19:30.130 "write_zeroes": true, 00:19:30.130 "zcopy": false, 00:19:30.130 "get_zone_info": false, 00:19:30.130 "zone_management": false, 00:19:30.130 "zone_append": false, 00:19:30.130 "compare": false, 00:19:30.130 "compare_and_write": false, 00:19:30.130 "abort": false, 00:19:30.130 "seek_hole": true, 00:19:30.130 "seek_data": true, 00:19:30.130 "copy": false, 00:19:30.130 "nvme_iov_md": false 00:19:30.130 }, 00:19:30.130 "driver_specific": { 00:19:30.130 "lvol": { 00:19:30.130 "lvol_store_uuid": "23449283-1523-4717-8c78-fe22e335e1a7", 00:19:30.130 "base_bdev": "nvme0n1", 00:19:30.130 "thin_provision": true, 00:19:30.130 "num_allocated_clusters": 0, 00:19:30.130 "snapshot": false, 00:19:30.130 "clone": false, 00:19:30.130 "esnap_clone": false 00:19:30.130 } 00:19:30.130 } 00:19:30.130 } 00:19:30.130 ]' 00:19:30.130 17:36:03 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:30.130 17:36:03 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:30.130 17:36:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:30.130 17:36:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:30.130 17:36:03 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:30.130 17:36:03 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:30.130 17:36:03 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:30.130 17:36:03 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:30.130 17:36:03 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:30.388 17:36:03 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:30.388 17:36:03 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:30.388 17:36:03 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 4ac9cfc9-8254-428d-a17b-3d9d1cc7357e 00:19:30.388 17:36:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=4ac9cfc9-8254-428d-a17b-3d9d1cc7357e 00:19:30.388 17:36:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:30.388 17:36:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:30.388 17:36:03 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:30.388 17:36:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4ac9cfc9-8254-428d-a17b-3d9d1cc7357e 00:19:30.646 17:36:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:30.646 { 00:19:30.646 "name": "4ac9cfc9-8254-428d-a17b-3d9d1cc7357e", 00:19:30.646 "aliases": [ 00:19:30.646 "lvs/nvme0n1p0" 00:19:30.646 ], 00:19:30.646 "product_name": "Logical Volume", 00:19:30.646 "block_size": 4096, 00:19:30.646 "num_blocks": 26476544, 00:19:30.646 "uuid": "4ac9cfc9-8254-428d-a17b-3d9d1cc7357e", 00:19:30.646 "assigned_rate_limits": { 00:19:30.646 "rw_ios_per_sec": 0, 00:19:30.646 "rw_mbytes_per_sec": 0, 00:19:30.646 "r_mbytes_per_sec": 0, 00:19:30.646 "w_mbytes_per_sec": 0 00:19:30.646 }, 00:19:30.646 "claimed": false, 00:19:30.646 "zoned": false, 00:19:30.646 "supported_io_types": { 00:19:30.646 "read": true, 00:19:30.646 "write": true, 00:19:30.646 "unmap": true, 00:19:30.646 "flush": false, 00:19:30.646 "reset": true, 00:19:30.646 "nvme_admin": false, 00:19:30.646 "nvme_io": false, 00:19:30.646 "nvme_io_md": false, 00:19:30.646 "write_zeroes": true, 00:19:30.646 "zcopy": false, 00:19:30.646 "get_zone_info": false, 00:19:30.646 "zone_management": false, 00:19:30.646 "zone_append": false, 00:19:30.646 "compare": false, 00:19:30.646 "compare_and_write": false, 00:19:30.646 "abort": false, 00:19:30.646 "seek_hole": true, 00:19:30.646 "seek_data": true, 00:19:30.646 "copy": false, 00:19:30.646 "nvme_iov_md": false 00:19:30.646 }, 00:19:30.646 "driver_specific": { 00:19:30.646 "lvol": { 00:19:30.646 "lvol_store_uuid": "23449283-1523-4717-8c78-fe22e335e1a7", 00:19:30.646 "base_bdev": "nvme0n1", 00:19:30.646 "thin_provision": true, 00:19:30.646 "num_allocated_clusters": 0, 00:19:30.646 "snapshot": false, 00:19:30.646 "clone": false, 00:19:30.646 "esnap_clone": false 00:19:30.646 } 00:19:30.646 } 00:19:30.646 } 00:19:30.646 ]' 00:19:30.646 17:36:03 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:30.646 17:36:03 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:30.646 17:36:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:30.646 17:36:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:30.646 17:36:03 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:30.646 17:36:03 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:30.646 17:36:03 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:30.646 17:36:03 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:30.904 17:36:04 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:30.904 17:36:04 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:30.904 17:36:04 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 4ac9cfc9-8254-428d-a17b-3d9d1cc7357e 00:19:30.904 17:36:04 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=4ac9cfc9-8254-428d-a17b-3d9d1cc7357e 00:19:30.904 17:36:04 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:30.904 17:36:04 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:30.904 17:36:04 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:30.904 17:36:04 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4ac9cfc9-8254-428d-a17b-3d9d1cc7357e 00:19:31.163 17:36:04 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:31.163 { 00:19:31.163 "name": "4ac9cfc9-8254-428d-a17b-3d9d1cc7357e", 00:19:31.163 "aliases": [ 00:19:31.163 "lvs/nvme0n1p0" 00:19:31.163 ], 00:19:31.163 "product_name": "Logical Volume", 00:19:31.163 "block_size": 4096, 00:19:31.163 "num_blocks": 26476544, 00:19:31.163 "uuid": "4ac9cfc9-8254-428d-a17b-3d9d1cc7357e", 00:19:31.163 "assigned_rate_limits": { 00:19:31.163 "rw_ios_per_sec": 0, 00:19:31.163 "rw_mbytes_per_sec": 0, 00:19:31.163 "r_mbytes_per_sec": 0, 00:19:31.163 "w_mbytes_per_sec": 0 00:19:31.163 }, 00:19:31.163 "claimed": false, 00:19:31.163 "zoned": false, 00:19:31.163 "supported_io_types": { 00:19:31.163 "read": true, 00:19:31.163 "write": true, 00:19:31.163 "unmap": true, 00:19:31.163 "flush": false, 00:19:31.163 "reset": true, 00:19:31.163 "nvme_admin": false, 00:19:31.163 "nvme_io": false, 00:19:31.163 "nvme_io_md": false, 00:19:31.163 "write_zeroes": true, 00:19:31.163 "zcopy": false, 00:19:31.163 "get_zone_info": false, 00:19:31.163 "zone_management": false, 00:19:31.163 "zone_append": false, 00:19:31.163 "compare": false, 00:19:31.163 "compare_and_write": false, 00:19:31.163 "abort": false, 00:19:31.163 "seek_hole": true, 00:19:31.163 "seek_data": true, 00:19:31.163 "copy": false, 00:19:31.163 "nvme_iov_md": false 00:19:31.163 }, 00:19:31.163 "driver_specific": { 00:19:31.163 "lvol": { 00:19:31.163 "lvol_store_uuid": "23449283-1523-4717-8c78-fe22e335e1a7", 00:19:31.163 "base_bdev": "nvme0n1", 00:19:31.163 "thin_provision": true, 00:19:31.163 "num_allocated_clusters": 0, 00:19:31.163 "snapshot": false, 00:19:31.163 "clone": false, 00:19:31.163 "esnap_clone": false 00:19:31.163 } 00:19:31.163 } 00:19:31.163 } 00:19:31.163 ]' 00:19:31.163 17:36:04 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:31.163 17:36:04 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:31.163 17:36:04 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:31.163 17:36:04 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:31.163 17:36:04 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:31.163 17:36:04 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:31.163 17:36:04 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:31.163 17:36:04 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4ac9cfc9-8254-428d-a17b-3d9d1cc7357e -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:31.163 [2024-12-07 17:36:04.539029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.163 [2024-12-07 17:36:04.539069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:31.163 [2024-12-07 17:36:04.539084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:31.163 [2024-12-07 17:36:04.539091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.163 [2024-12-07 17:36:04.541569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.163 [2024-12-07 17:36:04.541602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:31.163 [2024-12-07 17:36:04.541612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.454 ms 00:19:31.163 [2024-12-07 17:36:04.541618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.163 [2024-12-07 17:36:04.541715] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:31.163 [2024-12-07 17:36:04.542261] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:31.163 [2024-12-07 17:36:04.542367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.163 [2024-12-07 17:36:04.542376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:31.163 [2024-12-07 17:36:04.542385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.657 ms 00:19:31.163 [2024-12-07 17:36:04.542391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.163 [2024-12-07 17:36:04.542471] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 84040001-4357-46bc-af25-3c5f953812bb 00:19:31.423 [2024-12-07 17:36:04.543720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.423 [2024-12-07 17:36:04.543751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:31.423 [2024-12-07 17:36:04.543760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:31.423 [2024-12-07 17:36:04.543769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.424 [2024-12-07 17:36:04.550580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.424 [2024-12-07 17:36:04.550607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:31.424 [2024-12-07 17:36:04.550616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.742 ms 00:19:31.424 [2024-12-07 17:36:04.550625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.424 [2024-12-07 17:36:04.550731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.424 [2024-12-07 17:36:04.550742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:31.424 [2024-12-07 17:36:04.550748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.061 ms 00:19:31.424 [2024-12-07 17:36:04.550759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.424 [2024-12-07 17:36:04.550787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.424 [2024-12-07 17:36:04.550796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:31.424 [2024-12-07 17:36:04.550803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:31.424 [2024-12-07 17:36:04.550814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.424 [2024-12-07 17:36:04.550843] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:31.424 [2024-12-07 17:36:04.554069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.424 [2024-12-07 17:36:04.554096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:31.424 [2024-12-07 17:36:04.554107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.228 ms 00:19:31.424 [2024-12-07 17:36:04.554114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.424 [2024-12-07 17:36:04.554166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.424 [2024-12-07 17:36:04.554187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:31.424 [2024-12-07 17:36:04.554196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:31.424 [2024-12-07 17:36:04.554203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.424 [2024-12-07 17:36:04.554232] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:31.424 [2024-12-07 17:36:04.554347] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:31.424 [2024-12-07 17:36:04.554360] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:31.424 [2024-12-07 17:36:04.554370] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:31.424 [2024-12-07 17:36:04.554380] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:31.424 [2024-12-07 17:36:04.554387] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:31.424 [2024-12-07 17:36:04.554395] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:31.424 [2024-12-07 17:36:04.554401] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:31.424 [2024-12-07 17:36:04.554410] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:31.424 [2024-12-07 17:36:04.554417] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:31.424 [2024-12-07 17:36:04.554426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.424 [2024-12-07 17:36:04.554432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:31.424 [2024-12-07 17:36:04.554440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:19:31.424 [2024-12-07 17:36:04.554446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.424 [2024-12-07 17:36:04.554536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.424 
[2024-12-07 17:36:04.554543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:31.424 [2024-12-07 17:36:04.554551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:31.424 [2024-12-07 17:36:04.554557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.424 [2024-12-07 17:36:04.554658] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:31.424 [2024-12-07 17:36:04.554667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:31.424 [2024-12-07 17:36:04.554675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:31.424 [2024-12-07 17:36:04.554682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.424 [2024-12-07 17:36:04.554690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:31.424 [2024-12-07 17:36:04.554695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:31.424 [2024-12-07 17:36:04.554701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:31.424 [2024-12-07 17:36:04.554707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:31.424 [2024-12-07 17:36:04.554714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:31.424 [2024-12-07 17:36:04.554719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:31.424 [2024-12-07 17:36:04.554726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:31.424 [2024-12-07 17:36:04.554732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:31.424 [2024-12-07 17:36:04.554740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:31.424 [2024-12-07 17:36:04.554745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:31.424 [2024-12-07 17:36:04.554751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:31.424 [2024-12-07 17:36:04.554757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.424 [2024-12-07 17:36:04.554766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:31.424 [2024-12-07 17:36:04.554771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:31.424 [2024-12-07 17:36:04.554778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.424 [2024-12-07 17:36:04.554783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:31.424 [2024-12-07 17:36:04.554791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:31.424 [2024-12-07 17:36:04.554796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.424 [2024-12-07 17:36:04.554803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:31.424 [2024-12-07 17:36:04.554808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:31.424 [2024-12-07 17:36:04.554816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.424 [2024-12-07 17:36:04.554822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:31.424 [2024-12-07 17:36:04.554828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:31.424 [2024-12-07 17:36:04.554833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.424 [2024-12-07 17:36:04.554839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:31.424 [2024-12-07 17:36:04.554845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:31.424 [2024-12-07 17:36:04.554852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.424 [2024-12-07 17:36:04.554859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:31.424 [2024-12-07 17:36:04.554867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:31.424 [2024-12-07 17:36:04.554872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:31.424 [2024-12-07 17:36:04.554878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:31.424 [2024-12-07 17:36:04.554883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:31.424 [2024-12-07 17:36:04.554890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:31.424 [2024-12-07 17:36:04.554895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:31.424 [2024-12-07 17:36:04.554902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:31.424 [2024-12-07 17:36:04.554908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.424 [2024-12-07 17:36:04.554915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:31.424 [2024-12-07 17:36:04.554919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:31.424 [2024-12-07 17:36:04.554926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.424 [2024-12-07 17:36:04.554932] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:31.424 [2024-12-07 17:36:04.554940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:31.424 [2024-12-07 17:36:04.554945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:31.424 [2024-12-07 17:36:04.554952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.424 [2024-12-07 17:36:04.554959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:31.424 [2024-12-07 17:36:04.554967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:31.424 [2024-12-07 17:36:04.554973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:31.424 [2024-12-07 17:36:04.554995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:31.424 [2024-12-07 17:36:04.555001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:31.424 [2024-12-07 17:36:04.555008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:31.424 [2024-12-07 17:36:04.555015] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:31.424 [2024-12-07 17:36:04.555024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:31.424 [2024-12-07 17:36:04.555033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:31.424 [2024-12-07 17:36:04.555041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:31.424 [2024-12-07 17:36:04.555048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:31.424 [2024-12-07 17:36:04.555056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:31.424 [2024-12-07 17:36:04.555061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:31.424 [2024-12-07 17:36:04.555068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:31.424 [2024-12-07 17:36:04.555073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:31.425 [2024-12-07 17:36:04.555082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:31.425 [2024-12-07 17:36:04.555088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:31.425 [2024-12-07 17:36:04.555098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:31.425 [2024-12-07 17:36:04.555104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:31.425 [2024-12-07 17:36:04.555111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:31.425 [2024-12-07 17:36:04.555117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:31.425 [2024-12-07 17:36:04.555124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:31.425 [2024-12-07 17:36:04.555130] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:31.425 [2024-12-07 17:36:04.555140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:31.425 [2024-12-07 17:36:04.555147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:31.425 [2024-12-07 17:36:04.555154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:31.425 [2024-12-07 17:36:04.555160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:31.425 [2024-12-07 17:36:04.555167] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:31.425 [2024-12-07 17:36:04.555174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.425 [2024-12-07 17:36:04.555182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:31.425 [2024-12-07 17:36:04.555188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:19:31.425 [2024-12-07 17:36:04.555196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.425 [2024-12-07 17:36:04.555278] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:31.425 [2024-12-07 17:36:04.555290] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:33.951 [2024-12-07 17:36:07.124164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.951 [2024-12-07 17:36:07.124233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:33.951 [2024-12-07 17:36:07.124249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2568.875 ms 00:19:33.951 [2024-12-07 17:36:07.124260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.951 [2024-12-07 17:36:07.152684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.951 [2024-12-07 17:36:07.152733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:33.951 [2024-12-07 17:36:07.152746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.190 ms 00:19:33.951 [2024-12-07 17:36:07.152757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.951 [2024-12-07 17:36:07.152893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.951 [2024-12-07 17:36:07.152921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:33.951 [2024-12-07 17:36:07.152943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:33.951 [2024-12-07 17:36:07.152957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.951 [2024-12-07 17:36:07.196130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.951 [2024-12-07 17:36:07.196314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:33.951 [2024-12-07 17:36:07.196335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.119 ms 00:19:33.951 [2024-12-07 17:36:07.196346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.951 [2024-12-07 17:36:07.196442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.951 [2024-12-07 17:36:07.196455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:33.951 [2024-12-07 17:36:07.196464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:33.951 [2024-12-07 17:36:07.196474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.951 [2024-12-07 17:36:07.196890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.951 [2024-12-07 17:36:07.196912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:33.951 [2024-12-07 17:36:07.196921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:19:33.951 [2024-12-07 17:36:07.196930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.951 [2024-12-07 17:36:07.197067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.951 [2024-12-07 17:36:07.197079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:33.951 [2024-12-07 17:36:07.197102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:19:33.951 [2024-12-07 17:36:07.197115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.951 [2024-12-07 17:36:07.213253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.951 [2024-12-07 17:36:07.213290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:33.951 [2024-12-07 17:36:07.213301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.110 ms 00:19:33.951 [2024-12-07 17:36:07.213311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.951 [2024-12-07 17:36:07.225421] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:33.951 [2024-12-07 17:36:07.242875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.951 [2024-12-07 17:36:07.242911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:33.951 [2024-12-07 17:36:07.242924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.457 ms 00:19:33.951 [2024-12-07 17:36:07.242932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.951 [2024-12-07 17:36:07.311241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.951 [2024-12-07 17:36:07.311282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:33.951 [2024-12-07 17:36:07.311296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.223 ms 00:19:33.951 [2024-12-07 17:36:07.311304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.951 [2024-12-07 17:36:07.311521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.952 [2024-12-07 17:36:07.311533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:33.952 [2024-12-07 17:36:07.311547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:19:33.952 [2024-12-07 17:36:07.311554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.211 [2024-12-07 17:36:07.334659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.211 [2024-12-07 17:36:07.334833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:34.211 [2024-12-07 17:36:07.334855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.068 ms 00:19:34.211 [2024-12-07 17:36:07.334863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.211 [2024-12-07 17:36:07.358273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.211 [2024-12-07 17:36:07.358306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:34.211 [2024-12-07 17:36:07.358319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.083 ms 00:19:34.211 [2024-12-07 17:36:07.358327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.211 [2024-12-07 17:36:07.358918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.211 [2024-12-07 17:36:07.358933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:34.211 [2024-12-07 17:36:07.358944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:19:34.211 [2024-12-07 17:36:07.358952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.211 [2024-12-07 17:36:07.433105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.211 [2024-12-07 17:36:07.433220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:34.211 [2024-12-07 17:36:07.433242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.103 ms 00:19:34.211 [2024-12-07 17:36:07.433250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
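For orientation, the FTL startup trace running through this part of the log corresponds to the bdev stack assembled earlier in the run. In script form, a minimal sketch of the rpc.py calls that ftl/common.sh and ftl/trim.sh issued (PCI addresses, sizes, and UUIDs exactly as printed above; the helpers additionally clear any pre-existing lvstores before creating a new one):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Base device: QEMU NVMe controller at 0000:00:11.0, exposed as nvme0n1
  # (4096 B blocks, 1310720 blocks = 5120 MiB per the bdev dump above)
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0

  # Lvstore on nvme0n1, then a thin-provisioned 103424 MiB lvol inside it
  $RPC bdev_lvol_create_lvstore nvme0n1 lvs
  $RPC bdev_lvol_create nvme0n1p0 103424 -t -u 23449283-1523-4717-8c78-fe22e335e1a7

  # NV cache: second controller at 0000:00:10.0, split into one 5171 MiB
  # partition (nvc0n1p0), matching the cache_size computed by ftl/common.sh
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $RPC bdev_split_create nvc0n1 -s 5171 1

  # FTL bdev ftl0 on top of the lvol, with nvc0n1p0 as the write-buffer cache
  $RPC -t 240 bdev_ftl_create -b ftl0 -d 4ac9cfc9-8254-428d-a17b-3d9d1cc7357e -c nvc0n1p0 \
      --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

As the per-step durations show, scrubbing the 5 NV cache chunks dominates this bring-up (~2.57 s of the total startup reported just below).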
00:19:34.211 [2024-12-07 17:36:07.458163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.211 [2024-12-07 17:36:07.458197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:34.211 [2024-12-07 17:36:07.458210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.820 ms 00:19:34.211 [2024-12-07 17:36:07.458219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.211 [2024-12-07 17:36:07.481183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.211 [2024-12-07 17:36:07.481215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:34.211 [2024-12-07 17:36:07.481228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.901 ms 00:19:34.211 [2024-12-07 17:36:07.481235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.211 [2024-12-07 17:36:07.504744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.211 [2024-12-07 17:36:07.504875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:34.211 [2024-12-07 17:36:07.504895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.433 ms 00:19:34.211 [2024-12-07 17:36:07.504902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.211 [2024-12-07 17:36:07.504973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.211 [2024-12-07 17:36:07.505002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:34.211 [2024-12-07 17:36:07.505015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:34.211 [2024-12-07 17:36:07.505022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.211 [2024-12-07 17:36:07.505097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.211 [2024-12-07 17:36:07.505106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:34.211 [2024-12-07 17:36:07.505115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:34.211 [2024-12-07 17:36:07.505123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.211 [2024-12-07 17:36:07.506012] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:34.211 [2024-12-07 17:36:07.508942] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2966.672 ms, result 0 00:19:34.211 { 00:19:34.211 "name": "ftl0", 00:19:34.211 "uuid": "84040001-4357-46bc-af25-3c5f953812bb" 00:19:34.211 } 00:19:34.211 [2024-12-07 17:36:07.510503] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:34.211 17:36:07 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:34.211 17:36:07 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:34.211 17:36:07 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:34.211 17:36:07 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:34.211 17:36:07 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:34.211 17:36:07 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:34.212 17:36:07 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:34.471 17:36:07 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:34.729 [ 00:19:34.729 { 00:19:34.729 "name": "ftl0", 00:19:34.729 "aliases": [ 00:19:34.729 "84040001-4357-46bc-af25-3c5f953812bb" 00:19:34.729 ], 00:19:34.729 "product_name": "FTL disk", 00:19:34.729 "block_size": 4096, 00:19:34.730 "num_blocks": 23592960, 00:19:34.730 "uuid": "84040001-4357-46bc-af25-3c5f953812bb", 00:19:34.730 "assigned_rate_limits": { 00:19:34.730 "rw_ios_per_sec": 0, 00:19:34.730 "rw_mbytes_per_sec": 0, 00:19:34.730 "r_mbytes_per_sec": 0, 00:19:34.730 "w_mbytes_per_sec": 0 00:19:34.730 }, 00:19:34.730 "claimed": false, 00:19:34.730 "zoned": false, 00:19:34.730 "supported_io_types": { 00:19:34.730 "read": true, 00:19:34.730 "write": true, 00:19:34.730 "unmap": true, 00:19:34.730 "flush": true, 00:19:34.730 "reset": false, 00:19:34.730 "nvme_admin": false, 00:19:34.730 "nvme_io": false, 00:19:34.730 "nvme_io_md": false, 00:19:34.730 "write_zeroes": true, 00:19:34.730 "zcopy": false, 00:19:34.730 "get_zone_info": false, 00:19:34.730 "zone_management": false, 00:19:34.730 "zone_append": false, 00:19:34.730 "compare": false, 00:19:34.730 "compare_and_write": false, 00:19:34.730 "abort": false, 00:19:34.730 "seek_hole": false, 00:19:34.730 "seek_data": false, 00:19:34.730 "copy": false, 00:19:34.730 "nvme_iov_md": false 00:19:34.730 }, 00:19:34.730 "driver_specific": { 00:19:34.730 "ftl": { 00:19:34.730 "base_bdev": "4ac9cfc9-8254-428d-a17b-3d9d1cc7357e", 00:19:34.730 "cache": "nvc0n1p0" 00:19:34.730 } 00:19:34.730 } 00:19:34.730 } 00:19:34.730 ] 00:19:34.730 17:36:07 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:34.730 17:36:07 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:34.730 17:36:07 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:34.989 17:36:08 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:34.989 17:36:08 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:34.989 17:36:08 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:34.989 { 00:19:34.989 "name": "ftl0", 00:19:34.989 "aliases": [ 00:19:34.989 "84040001-4357-46bc-af25-3c5f953812bb" 00:19:34.989 ], 00:19:34.989 "product_name": "FTL disk", 00:19:34.989 "block_size": 4096, 00:19:34.989 "num_blocks": 23592960, 00:19:34.989 "uuid": "84040001-4357-46bc-af25-3c5f953812bb", 00:19:34.989 "assigned_rate_limits": { 00:19:34.989 "rw_ios_per_sec": 0, 00:19:34.989 "rw_mbytes_per_sec": 0, 00:19:34.989 "r_mbytes_per_sec": 0, 00:19:34.989 "w_mbytes_per_sec": 0 00:19:34.989 }, 00:19:34.989 "claimed": false, 00:19:34.989 "zoned": false, 00:19:34.989 "supported_io_types": { 00:19:34.989 "read": true, 00:19:34.989 "write": true, 00:19:34.989 "unmap": true, 00:19:34.989 "flush": true, 00:19:34.989 "reset": false, 00:19:34.989 "nvme_admin": false, 00:19:34.989 "nvme_io": false, 00:19:34.989 "nvme_io_md": false, 00:19:34.989 "write_zeroes": true, 00:19:34.989 "zcopy": false, 00:19:34.989 "get_zone_info": false, 00:19:34.989 "zone_management": false, 00:19:34.989 "zone_append": false, 00:19:34.989 "compare": false, 00:19:34.989 "compare_and_write": false, 00:19:34.989 "abort": false, 00:19:34.989 "seek_hole": false, 00:19:34.989 "seek_data": false, 00:19:34.989 "copy": false, 00:19:34.989 "nvme_iov_md": false 00:19:34.989 }, 00:19:34.989 "driver_specific": { 00:19:34.989 "ftl": { 00:19:34.989 "base_bdev": "4ac9cfc9-8254-428d-a17b-3d9d1cc7357e", 
00:19:34.989 "cache": "nvc0n1p0" 00:19:34.989 } 00:19:34.989 } 00:19:34.989 } 00:19:34.989 ]' 00:19:34.989 17:36:08 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:35.248 17:36:08 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:35.248 17:36:08 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:35.249 [2024-12-07 17:36:08.513303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.249 [2024-12-07 17:36:08.513338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:35.249 [2024-12-07 17:36:08.513351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:35.249 [2024-12-07 17:36:08.513361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.249 [2024-12-07 17:36:08.513388] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:35.249 [2024-12-07 17:36:08.515532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.249 [2024-12-07 17:36:08.515649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:35.249 [2024-12-07 17:36:08.515669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.130 ms 00:19:35.249 [2024-12-07 17:36:08.515675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.249 [2024-12-07 17:36:08.516134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.249 [2024-12-07 17:36:08.516143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:35.249 [2024-12-07 17:36:08.516152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:19:35.249 [2024-12-07 17:36:08.516158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.249 [2024-12-07 17:36:08.518931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.249 [2024-12-07 17:36:08.519021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:35.249 [2024-12-07 17:36:08.519034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.743 ms 00:19:35.249 [2024-12-07 17:36:08.519041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.249 [2024-12-07 17:36:08.524343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.249 [2024-12-07 17:36:08.524366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:35.249 [2024-12-07 17:36:08.524376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.256 ms 00:19:35.249 [2024-12-07 17:36:08.524382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.249 [2024-12-07 17:36:08.542489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.249 [2024-12-07 17:36:08.542512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:35.249 [2024-12-07 17:36:08.542524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.052 ms 00:19:35.249 [2024-12-07 17:36:08.542530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.249 [2024-12-07 17:36:08.555122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.249 [2024-12-07 17:36:08.555145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:35.249 [2024-12-07 17:36:08.555156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 12.541 ms 00:19:35.249 [2024-12-07 17:36:08.555165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.249 [2024-12-07 17:36:08.555321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.249 [2024-12-07 17:36:08.555329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:35.249 [2024-12-07 17:36:08.555337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:19:35.249 [2024-12-07 17:36:08.555343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.249 [2024-12-07 17:36:08.573299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.249 [2024-12-07 17:36:08.573322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:35.249 [2024-12-07 17:36:08.573331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.927 ms 00:19:35.249 [2024-12-07 17:36:08.573337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.249 [2024-12-07 17:36:08.590766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.249 [2024-12-07 17:36:08.590787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:35.249 [2024-12-07 17:36:08.590799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.367 ms 00:19:35.249 [2024-12-07 17:36:08.590805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.249 [2024-12-07 17:36:08.608174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.249 [2024-12-07 17:36:08.608196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:35.249 [2024-12-07 17:36:08.608205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.319 ms 00:19:35.249 [2024-12-07 17:36:08.608211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.510 [2024-12-07 17:36:08.625629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.510 [2024-12-07 17:36:08.625650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:35.510 [2024-12-07 17:36:08.625659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.309 ms 00:19:35.510 [2024-12-07 17:36:08.625665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.510 [2024-12-07 17:36:08.625711] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:35.510 [2024-12-07 17:36:08.625723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625775] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 
[2024-12-07 17:36:08.625961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.625997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:19:35.510 [2024-12-07 17:36:08.626140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:35.510 [2024-12-07 17:36:08.626352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:35.511 [2024-12-07 17:36:08.626358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:35.511 [2024-12-07 17:36:08.626365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:35.511 [2024-12-07 17:36:08.626371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:35.511 [2024-12-07 17:36:08.626378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:35.511 [2024-12-07 17:36:08.626384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:35.511 [2024-12-07 17:36:08.626393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:35.511 [2024-12-07 17:36:08.626399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:35.511 [2024-12-07 17:36:08.626407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:35.511 [2024-12-07 17:36:08.626412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:35.511 [2024-12-07 17:36:08.626419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:35.511 [2024-12-07 17:36:08.626431] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:35.511 [2024-12-07 17:36:08.626440] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84040001-4357-46bc-af25-3c5f953812bb 00:19:35.511 [2024-12-07 17:36:08.626446] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:35.511 [2024-12-07 17:36:08.626454] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:35.511 [2024-12-07 17:36:08.626459] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:35.511 [2024-12-07 17:36:08.626468] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:35.511 [2024-12-07 17:36:08.626474] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:35.511 [2024-12-07 17:36:08.626482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:19:35.511 [2024-12-07 17:36:08.626487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:35.511 [2024-12-07 17:36:08.626493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:35.511 [2024-12-07 17:36:08.626498] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:35.511 [2024-12-07 17:36:08.626505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.511 [2024-12-07 17:36:08.626511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:35.511 [2024-12-07 17:36:08.626519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:19:35.511 [2024-12-07 17:36:08.626525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.511 [2024-12-07 17:36:08.636461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.511 [2024-12-07 17:36:08.636485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:35.511 [2024-12-07 17:36:08.636497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.908 ms 00:19:35.511 [2024-12-07 17:36:08.636504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.511 [2024-12-07 17:36:08.636819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.511 [2024-12-07 17:36:08.636832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:35.511 [2024-12-07 17:36:08.636841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:19:35.511 [2024-12-07 17:36:08.636846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.511 [2024-12-07 17:36:08.672943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.511 [2024-12-07 17:36:08.672967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:35.511 [2024-12-07 17:36:08.672977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.511 [2024-12-07 17:36:08.672990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.511 [2024-12-07 17:36:08.673073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.511 [2024-12-07 17:36:08.673082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:35.511 [2024-12-07 17:36:08.673090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.511 [2024-12-07 17:36:08.673096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.511 [2024-12-07 17:36:08.673153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.511 [2024-12-07 17:36:08.673161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:35.511 [2024-12-07 17:36:08.673172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.511 [2024-12-07 17:36:08.673178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.511 [2024-12-07 17:36:08.673205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.511 [2024-12-07 17:36:08.673212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:35.511 [2024-12-07 17:36:08.673219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.511 [2024-12-07 17:36:08.673226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.511 [2024-12-07 17:36:08.740576] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.511 [2024-12-07 17:36:08.740612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:35.511 [2024-12-07 17:36:08.740623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.511 [2024-12-07 17:36:08.740631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.511 [2024-12-07 17:36:08.792198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.511 [2024-12-07 17:36:08.792230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:35.511 [2024-12-07 17:36:08.792240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.511 [2024-12-07 17:36:08.792246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.511 [2024-12-07 17:36:08.792334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.511 [2024-12-07 17:36:08.792343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:35.511 [2024-12-07 17:36:08.792439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.511 [2024-12-07 17:36:08.792448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.511 [2024-12-07 17:36:08.792503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.511 [2024-12-07 17:36:08.792511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:35.511 [2024-12-07 17:36:08.792519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.511 [2024-12-07 17:36:08.792525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.511 [2024-12-07 17:36:08.792621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.511 [2024-12-07 17:36:08.792630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:35.511 [2024-12-07 17:36:08.792638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.511 [2024-12-07 17:36:08.792646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.511 [2024-12-07 17:36:08.792694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.511 [2024-12-07 17:36:08.792702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:35.511 [2024-12-07 17:36:08.792710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.511 [2024-12-07 17:36:08.792716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.511 [2024-12-07 17:36:08.792767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.511 [2024-12-07 17:36:08.792775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:35.511 [2024-12-07 17:36:08.792785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.511 [2024-12-07 17:36:08.792792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.511 [2024-12-07 17:36:08.792843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.511 [2024-12-07 17:36:08.792851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:35.511 [2024-12-07 17:36:08.792859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.511 [2024-12-07 17:36:08.792866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:35.511 [2024-12-07 17:36:08.793047] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 279.721 ms, result 0 00:19:35.511 true 00:19:35.511 17:36:08 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76311 00:19:35.511 17:36:08 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76311 ']' 00:19:35.511 17:36:08 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76311 00:19:35.511 17:36:08 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:35.511 17:36:08 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.511 17:36:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76311 00:19:35.511 17:36:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:35.511 17:36:08 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:35.511 killing process with pid 76311 00:19:35.511 17:36:08 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76311' 00:19:35.511 17:36:08 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76311 00:19:35.511 17:36:08 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76311 00:19:42.118 17:36:14 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:42.404 65536+0 records in 00:19:42.404 65536+0 records out 00:19:42.404 268435456 bytes (268 MB, 256 MiB) copied, 1.05811 s, 254 MB/s 00:19:42.404 17:36:15 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:42.698 [2024-12-07 17:36:15.777876] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:19:42.698 [2024-12-07 17:36:15.777969] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76494 ] 00:19:42.698 [2024-12-07 17:36:15.927890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.698 [2024-12-07 17:36:16.016653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.961 [2024-12-07 17:36:16.248973] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:42.961 [2024-12-07 17:36:16.249039] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:43.224 [2024-12-07 17:36:16.405658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.224 [2024-12-07 17:36:16.405702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:43.224 [2024-12-07 17:36:16.405716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:43.224 [2024-12-07 17:36:16.405724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.224 [2024-12-07 17:36:16.408378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.224 [2024-12-07 17:36:16.408414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:43.224 [2024-12-07 17:36:16.408424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.638 ms 00:19:43.224 [2024-12-07 17:36:16.408432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.224 [2024-12-07 17:36:16.408504] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:43.224 [2024-12-07 17:36:16.409388] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:43.224 [2024-12-07 17:36:16.409435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.224 [2024-12-07 17:36:16.409446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:43.224 [2024-12-07 17:36:16.409456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:19:43.224 [2024-12-07 17:36:16.409463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.224 [2024-12-07 17:36:16.410632] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:43.224 [2024-12-07 17:36:16.423221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.224 [2024-12-07 17:36:16.423255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:43.224 [2024-12-07 17:36:16.423267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.590 ms 00:19:43.224 [2024-12-07 17:36:16.423280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.224 [2024-12-07 17:36:16.423366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.224 [2024-12-07 17:36:16.423377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:43.224 [2024-12-07 17:36:16.423386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:43.224 [2024-12-07 17:36:16.423393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.224 [2024-12-07 17:36:16.428319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:43.224 [2024-12-07 17:36:16.428348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:43.224 [2024-12-07 17:36:16.428357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.885 ms 00:19:43.224 [2024-12-07 17:36:16.428364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.224 [2024-12-07 17:36:16.428447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.224 [2024-12-07 17:36:16.428457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:43.224 [2024-12-07 17:36:16.428465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:43.224 [2024-12-07 17:36:16.428472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.224 [2024-12-07 17:36:16.428498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.224 [2024-12-07 17:36:16.428506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:43.224 [2024-12-07 17:36:16.428515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:43.224 [2024-12-07 17:36:16.428522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.224 [2024-12-07 17:36:16.428543] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:43.224 [2024-12-07 17:36:16.431840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.224 [2024-12-07 17:36:16.431868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:43.224 [2024-12-07 17:36:16.431877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.303 ms 00:19:43.224 [2024-12-07 17:36:16.431884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.224 [2024-12-07 17:36:16.431921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.224 [2024-12-07 17:36:16.431930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:43.224 [2024-12-07 17:36:16.431937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:43.224 [2024-12-07 17:36:16.431945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.224 [2024-12-07 17:36:16.431964] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:43.224 [2024-12-07 17:36:16.431992] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:43.224 [2024-12-07 17:36:16.432027] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:43.224 [2024-12-07 17:36:16.432042] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:43.224 [2024-12-07 17:36:16.432147] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:43.224 [2024-12-07 17:36:16.432158] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:43.224 [2024-12-07 17:36:16.432168] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:43.224 [2024-12-07 17:36:16.432180] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:43.224 [2024-12-07 17:36:16.432188] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:43.224 [2024-12-07 17:36:16.432196] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:43.224 [2024-12-07 17:36:16.432204] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:43.224 [2024-12-07 17:36:16.432211] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:43.224 [2024-12-07 17:36:16.432219] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:43.224 [2024-12-07 17:36:16.432227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.224 [2024-12-07 17:36:16.432234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:43.224 [2024-12-07 17:36:16.432241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:19:43.224 [2024-12-07 17:36:16.432248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.224 [2024-12-07 17:36:16.432337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.225 [2024-12-07 17:36:16.432347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:43.225 [2024-12-07 17:36:16.432354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:43.225 [2024-12-07 17:36:16.432361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.225 [2024-12-07 17:36:16.432475] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:43.225 [2024-12-07 17:36:16.432492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:43.225 [2024-12-07 17:36:16.432500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:43.225 [2024-12-07 17:36:16.432508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.225 [2024-12-07 17:36:16.432515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:43.225 [2024-12-07 17:36:16.432522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:43.225 [2024-12-07 17:36:16.432529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:43.225 [2024-12-07 17:36:16.432536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:43.225 [2024-12-07 17:36:16.432543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:43.225 [2024-12-07 17:36:16.432550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:43.225 [2024-12-07 17:36:16.432556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:43.225 [2024-12-07 17:36:16.432569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:43.225 [2024-12-07 17:36:16.432575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:43.225 [2024-12-07 17:36:16.432581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:43.225 [2024-12-07 17:36:16.432588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:43.225 [2024-12-07 17:36:16.432594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.225 [2024-12-07 17:36:16.432600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:43.225 [2024-12-07 17:36:16.432607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:43.225 [2024-12-07 17:36:16.432613] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.225 [2024-12-07 17:36:16.432619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:43.225 [2024-12-07 17:36:16.432626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:43.225 [2024-12-07 17:36:16.432632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:43.225 [2024-12-07 17:36:16.432638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:43.225 [2024-12-07 17:36:16.432644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:43.225 [2024-12-07 17:36:16.432651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:43.225 [2024-12-07 17:36:16.432657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:43.225 [2024-12-07 17:36:16.432663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:43.225 [2024-12-07 17:36:16.432669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:43.225 [2024-12-07 17:36:16.432676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:43.225 [2024-12-07 17:36:16.432682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:43.225 [2024-12-07 17:36:16.432688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:43.225 [2024-12-07 17:36:16.432694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:43.225 [2024-12-07 17:36:16.432701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:43.225 [2024-12-07 17:36:16.432707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:43.225 [2024-12-07 17:36:16.432713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:43.225 [2024-12-07 17:36:16.432719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:43.225 [2024-12-07 17:36:16.432725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:43.225 [2024-12-07 17:36:16.432731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:43.225 [2024-12-07 17:36:16.432738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:43.225 [2024-12-07 17:36:16.432746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.225 [2024-12-07 17:36:16.432753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:43.225 [2024-12-07 17:36:16.432759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:43.225 [2024-12-07 17:36:16.432765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.225 [2024-12-07 17:36:16.432771] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:43.225 [2024-12-07 17:36:16.432779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:43.225 [2024-12-07 17:36:16.432788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:43.225 [2024-12-07 17:36:16.432795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.225 [2024-12-07 17:36:16.432803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:43.225 [2024-12-07 17:36:16.432810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:43.225 [2024-12-07 17:36:16.432816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:43.225 
[2024-12-07 17:36:16.432822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:43.225 [2024-12-07 17:36:16.432828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:43.225 [2024-12-07 17:36:16.432834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:43.225 [2024-12-07 17:36:16.432843] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:43.225 [2024-12-07 17:36:16.432851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:43.225 [2024-12-07 17:36:16.432859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:43.225 [2024-12-07 17:36:16.432866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:43.225 [2024-12-07 17:36:16.432873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:43.225 [2024-12-07 17:36:16.432880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:43.225 [2024-12-07 17:36:16.432887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:43.225 [2024-12-07 17:36:16.432893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:43.225 [2024-12-07 17:36:16.432900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:43.225 [2024-12-07 17:36:16.432907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:43.225 [2024-12-07 17:36:16.432914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:43.225 [2024-12-07 17:36:16.432920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:43.225 [2024-12-07 17:36:16.432927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:43.225 [2024-12-07 17:36:16.432934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:43.225 [2024-12-07 17:36:16.432940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:43.225 [2024-12-07 17:36:16.432947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:43.225 [2024-12-07 17:36:16.432954] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:43.225 [2024-12-07 17:36:16.432962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:43.225 [2024-12-07 17:36:16.432971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:43.225 [2024-12-07 17:36:16.432992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:43.225 [2024-12-07 17:36:16.433001] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:43.225 [2024-12-07 17:36:16.433008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:43.225 [2024-12-07 17:36:16.433016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.225 [2024-12-07 17:36:16.433026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:43.225 [2024-12-07 17:36:16.433033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:19:43.225 [2024-12-07 17:36:16.433040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.225 [2024-12-07 17:36:16.459236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.225 [2024-12-07 17:36:16.459270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:43.225 [2024-12-07 17:36:16.459280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.141 ms 00:19:43.225 [2024-12-07 17:36:16.459288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.225 [2024-12-07 17:36:16.459403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.225 [2024-12-07 17:36:16.459412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:43.225 [2024-12-07 17:36:16.459420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:43.225 [2024-12-07 17:36:16.459428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.225 [2024-12-07 17:36:16.509826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.225 [2024-12-07 17:36:16.509867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:43.225 [2024-12-07 17:36:16.509882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.377 ms 00:19:43.225 [2024-12-07 17:36:16.509890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.225 [2024-12-07 17:36:16.509995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.225 [2024-12-07 17:36:16.510008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:43.225 [2024-12-07 17:36:16.510017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:43.225 [2024-12-07 17:36:16.510025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.226 [2024-12-07 17:36:16.510399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.226 [2024-12-07 17:36:16.510426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:43.226 [2024-12-07 17:36:16.510442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:19:43.226 [2024-12-07 17:36:16.510450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.226 [2024-12-07 17:36:16.510583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.226 [2024-12-07 17:36:16.510592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:43.226 [2024-12-07 17:36:16.510600] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:19:43.226 [2024-12-07 17:36:16.510608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.226 [2024-12-07 17:36:16.524894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.226 [2024-12-07 17:36:16.524930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:43.226 [2024-12-07 17:36:16.524940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.263 ms 00:19:43.226 [2024-12-07 17:36:16.524948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.226 [2024-12-07 17:36:16.538389] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:43.226 [2024-12-07 17:36:16.538432] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:43.226 [2024-12-07 17:36:16.538445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.226 [2024-12-07 17:36:16.538454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:43.226 [2024-12-07 17:36:16.538465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.384 ms 00:19:43.226 [2024-12-07 17:36:16.538472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.226 [2024-12-07 17:36:16.563278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.226 [2024-12-07 17:36:16.563325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:43.226 [2024-12-07 17:36:16.563344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.720 ms 00:19:43.226 [2024-12-07 17:36:16.563353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.226 [2024-12-07 17:36:16.575920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.226 [2024-12-07 17:36:16.575963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:43.226 [2024-12-07 17:36:16.575976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.479 ms 00:19:43.226 [2024-12-07 17:36:16.575993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.226 [2024-12-07 17:36:16.588415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.226 [2024-12-07 17:36:16.588460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:43.226 [2024-12-07 17:36:16.588472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.336 ms 00:19:43.226 [2024-12-07 17:36:16.588480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.226 [2024-12-07 17:36:16.589146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.226 [2024-12-07 17:36:16.589177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:43.226 [2024-12-07 17:36:16.589188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:19:43.226 [2024-12-07 17:36:16.589197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.487 [2024-12-07 17:36:16.655365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.487 [2024-12-07 17:36:16.655428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:43.487 [2024-12-07 17:36:16.655444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.140 ms 00:19:43.487 [2024-12-07 17:36:16.655454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.487 [2024-12-07 17:36:16.666583] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:43.487 [2024-12-07 17:36:16.685150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.487 [2024-12-07 17:36:16.685197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:43.487 [2024-12-07 17:36:16.685211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.598 ms 00:19:43.487 [2024-12-07 17:36:16.685220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.487 [2024-12-07 17:36:16.685318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.487 [2024-12-07 17:36:16.685329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:43.487 [2024-12-07 17:36:16.685339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:43.487 [2024-12-07 17:36:16.685347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.487 [2024-12-07 17:36:16.685405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.487 [2024-12-07 17:36:16.685417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:43.487 [2024-12-07 17:36:16.685426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:43.488 [2024-12-07 17:36:16.685436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.488 [2024-12-07 17:36:16.685470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.488 [2024-12-07 17:36:16.685481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:43.488 [2024-12-07 17:36:16.685490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:43.488 [2024-12-07 17:36:16.685498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.488 [2024-12-07 17:36:16.685537] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:43.488 [2024-12-07 17:36:16.685548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.488 [2024-12-07 17:36:16.685574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:43.488 [2024-12-07 17:36:16.685585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:43.488 [2024-12-07 17:36:16.685593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.488 [2024-12-07 17:36:16.711401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.488 [2024-12-07 17:36:16.711451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:43.488 [2024-12-07 17:36:16.711464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.786 ms 00:19:43.488 [2024-12-07 17:36:16.711473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.488 [2024-12-07 17:36:16.711580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.488 [2024-12-07 17:36:16.711591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:43.488 [2024-12-07 17:36:16.711602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:43.488 [2024-12-07 17:36:16.711620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:43.488 [2024-12-07 17:36:16.713463] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:43.488 [2024-12-07 17:36:16.716927] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 307.461 ms, result 0 00:19:43.488 [2024-12-07 17:36:16.718039] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:43.488 [2024-12-07 17:36:16.731720] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:44.432  [2024-12-07T17:36:18.760Z] Copying: 16/256 [MB] (16 MBps) [2024-12-07T17:36:20.148Z] Copying: 30/256 [MB] (13 MBps) [2024-12-07T17:36:21.090Z] Copying: 44/256 [MB] (14 MBps) [2024-12-07T17:36:22.033Z] Copying: 58/256 [MB] (14 MBps) [2024-12-07T17:36:22.970Z] Copying: 68/256 [MB] (10 MBps) [2024-12-07T17:36:23.911Z] Copying: 84/256 [MB] (16 MBps) [2024-12-07T17:36:24.851Z] Copying: 105/256 [MB] (20 MBps) [2024-12-07T17:36:25.794Z] Copying: 119/256 [MB] (14 MBps) [2024-12-07T17:36:27.175Z] Copying: 132/256 [MB] (12 MBps) [2024-12-07T17:36:27.744Z] Copying: 148/256 [MB] (16 MBps) [2024-12-07T17:36:29.126Z] Copying: 164/256 [MB] (15 MBps) [2024-12-07T17:36:30.066Z] Copying: 180/256 [MB] (15 MBps) [2024-12-07T17:36:31.008Z] Copying: 196/256 [MB] (16 MBps) [2024-12-07T17:36:31.947Z] Copying: 209/256 [MB] (12 MBps) [2024-12-07T17:36:32.889Z] Copying: 225/256 [MB] (16 MBps) [2024-12-07T17:36:33.832Z] Copying: 240/256 [MB] (14 MBps) [2024-12-07T17:36:34.095Z] Copying: 253/256 [MB] (12 MBps) [2024-12-07T17:36:34.095Z] Copying: 256/256 [MB] (average 14 MBps)[2024-12-07 17:36:33.901594] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:00.713 [2024-12-07 17:36:33.911784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.713 [2024-12-07 17:36:33.911834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:00.713 [2024-12-07 17:36:33.911850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:00.713 [2024-12-07 17:36:33.911867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.713 [2024-12-07 17:36:33.911892] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:00.713 [2024-12-07 17:36:33.914881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.713 [2024-12-07 17:36:33.914917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:00.713 [2024-12-07 17:36:33.914929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.973 ms 00:20:00.713 [2024-12-07 17:36:33.914938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.713 [2024-12-07 17:36:33.918094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.713 [2024-12-07 17:36:33.918137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:00.713 [2024-12-07 17:36:33.918148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.102 ms 00:20:00.713 [2024-12-07 17:36:33.918156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.713 [2024-12-07 17:36:33.926712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.713 [2024-12-07 17:36:33.926764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:20:00.713 [2024-12-07 17:36:33.926775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.536 ms 00:20:00.713 [2024-12-07 17:36:33.926783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.713 [2024-12-07 17:36:33.933707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.713 [2024-12-07 17:36:33.933746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:00.713 [2024-12-07 17:36:33.933756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.882 ms 00:20:00.713 [2024-12-07 17:36:33.933764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.713 [2024-12-07 17:36:33.958822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.713 [2024-12-07 17:36:33.958869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:00.713 [2024-12-07 17:36:33.958881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.996 ms 00:20:00.713 [2024-12-07 17:36:33.958888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.713 [2024-12-07 17:36:33.975121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.713 [2024-12-07 17:36:33.975171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:00.713 [2024-12-07 17:36:33.975187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.183 ms 00:20:00.713 [2024-12-07 17:36:33.975195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.713 [2024-12-07 17:36:33.975348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.713 [2024-12-07 17:36:33.975360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:00.713 [2024-12-07 17:36:33.975370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:20:00.713 [2024-12-07 17:36:33.975387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.713 [2024-12-07 17:36:34.000731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.713 [2024-12-07 17:36:34.000774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:00.713 [2024-12-07 17:36:34.000786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.328 ms 00:20:00.713 [2024-12-07 17:36:34.000794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.713 [2024-12-07 17:36:34.025559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.713 [2024-12-07 17:36:34.025605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:00.713 [2024-12-07 17:36:34.025616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.719 ms 00:20:00.713 [2024-12-07 17:36:34.025623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.713 [2024-12-07 17:36:34.049910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.713 [2024-12-07 17:36:34.049952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:00.713 [2024-12-07 17:36:34.049964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.238 ms 00:20:00.713 [2024-12-07 17:36:34.049971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.713 [2024-12-07 17:36:34.074394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.713 [2024-12-07 17:36:34.074451] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:00.713 [2024-12-07 17:36:34.074462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.331 ms 00:20:00.714 [2024-12-07 17:36:34.074469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.714 [2024-12-07 17:36:34.074514] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:00.714 [2024-12-07 17:36:34.074530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:20:00.714 [2024-12-07 17:36:34.074704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.074972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:00.714 [2024-12-07 17:36:34.075225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:00.715 [2024-12-07 17:36:34.075232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:00.715 [2024-12-07 17:36:34.075240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:00.715 [2024-12-07 17:36:34.075248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:00.715 [2024-12-07 17:36:34.075255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:00.715 [2024-12-07 17:36:34.075263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:00.715 [2024-12-07 17:36:34.075272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:00.715 [2024-12-07 17:36:34.075290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:00.715 [2024-12-07 17:36:34.075297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:00.715 [2024-12-07 17:36:34.075305] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:00.715 [2024-12-07 17:36:34.075319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:00.715 [2024-12-07 17:36:34.075328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:00.715 [2024-12-07 17:36:34.075335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:00.715 [2024-12-07 17:36:34.075351] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:00.715 [2024-12-07 17:36:34.075359] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84040001-4357-46bc-af25-3c5f953812bb 00:20:00.715 [2024-12-07 17:36:34.075368] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:00.715 [2024-12-07 17:36:34.075376] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:00.715 [2024-12-07 17:36:34.075383] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:00.715 [2024-12-07 17:36:34.075392] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:00.715 [2024-12-07 17:36:34.075399] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:00.715 [2024-12-07 17:36:34.075407] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:00.715 [2024-12-07 17:36:34.075415] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:00.715 [2024-12-07 17:36:34.075422] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:00.715 [2024-12-07 17:36:34.075428] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:00.715 [2024-12-07 17:36:34.075435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.715 [2024-12-07 17:36:34.075446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:00.715 [2024-12-07 17:36:34.075455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.922 ms 00:20:00.715 [2024-12-07 17:36:34.075463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.715 [2024-12-07 17:36:34.088727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.715 [2024-12-07 17:36:34.088765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:00.715 [2024-12-07 17:36:34.088777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.232 ms 00:20:00.715 [2024-12-07 17:36:34.088785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.715 [2024-12-07 17:36:34.089207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.715 [2024-12-07 17:36:34.089224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:00.715 [2024-12-07 17:36:34.089233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:20:00.715 [2024-12-07 17:36:34.089241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.976 [2024-12-07 17:36:34.127780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.976 [2024-12-07 17:36:34.127830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:00.976 [2024-12-07 17:36:34.127841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.976 [2024-12-07 17:36:34.127849] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:00.976 [2024-12-07 17:36:34.127957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.976 [2024-12-07 17:36:34.127967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:00.976 [2024-12-07 17:36:34.127976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.976 [2024-12-07 17:36:34.128007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.976 [2024-12-07 17:36:34.128061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.976 [2024-12-07 17:36:34.128071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:00.976 [2024-12-07 17:36:34.128080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.976 [2024-12-07 17:36:34.128087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.976 [2024-12-07 17:36:34.128105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.976 [2024-12-07 17:36:34.128116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:00.976 [2024-12-07 17:36:34.128124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.976 [2024-12-07 17:36:34.128131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.976 [2024-12-07 17:36:34.211495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.976 [2024-12-07 17:36:34.211552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:00.976 [2024-12-07 17:36:34.211565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.976 [2024-12-07 17:36:34.211574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.976 [2024-12-07 17:36:34.279604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.976 [2024-12-07 17:36:34.279660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:00.976 [2024-12-07 17:36:34.279674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.977 [2024-12-07 17:36:34.279683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.977 [2024-12-07 17:36:34.279758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.977 [2024-12-07 17:36:34.279768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:00.977 [2024-12-07 17:36:34.279777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.977 [2024-12-07 17:36:34.279786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.977 [2024-12-07 17:36:34.279820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.977 [2024-12-07 17:36:34.279829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:00.977 [2024-12-07 17:36:34.279844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.977 [2024-12-07 17:36:34.279852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.977 [2024-12-07 17:36:34.279950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.977 [2024-12-07 17:36:34.279961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:00.977 [2024-12-07 17:36:34.279970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:20:00.977 [2024-12-07 17:36:34.279978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.977 [2024-12-07 17:36:34.280030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.977 [2024-12-07 17:36:34.280039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:00.977 [2024-12-07 17:36:34.280048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.977 [2024-12-07 17:36:34.280060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.977 [2024-12-07 17:36:34.280104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.977 [2024-12-07 17:36:34.280113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:00.977 [2024-12-07 17:36:34.280122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.977 [2024-12-07 17:36:34.280130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.977 [2024-12-07 17:36:34.280178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:00.977 [2024-12-07 17:36:34.280189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:00.977 [2024-12-07 17:36:34.280200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:00.977 [2024-12-07 17:36:34.280209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.977 [2024-12-07 17:36:34.280372] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 368.572 ms, result 0 00:20:01.929 00:20:01.929 00:20:01.929 17:36:35 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76696 00:20:01.929 17:36:35 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76696 00:20:01.929 17:36:35 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76696 ']' 00:20:01.929 17:36:35 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:01.929 17:36:35 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.929 17:36:35 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.929 17:36:35 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.929 17:36:35 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.929 17:36:35 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:01.929 [2024-12-07 17:36:35.247432] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:20:01.929 [2024-12-07 17:36:35.247585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76696 ] 00:20:02.190 [2024-12-07 17:36:35.412200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.190 [2024-12-07 17:36:35.531640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.132 17:36:36 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:03.132 17:36:36 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:03.132 17:36:36 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:03.132 [2024-12-07 17:36:36.468304] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:03.132 [2024-12-07 17:36:36.468387] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:03.396 [2024-12-07 17:36:36.647385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.396 [2024-12-07 17:36:36.647442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:03.396 [2024-12-07 17:36:36.647459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:03.396 [2024-12-07 17:36:36.647469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.396 [2024-12-07 17:36:36.650453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.396 [2024-12-07 17:36:36.650502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:03.396 [2024-12-07 17:36:36.650514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.962 ms 00:20:03.396 [2024-12-07 17:36:36.650523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.396 [2024-12-07 17:36:36.650641] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:03.396 [2024-12-07 17:36:36.651366] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:03.396 [2024-12-07 17:36:36.651393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.396 [2024-12-07 17:36:36.651403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:03.396 [2024-12-07 17:36:36.651414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:20:03.396 [2024-12-07 17:36:36.651422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.396 [2024-12-07 17:36:36.653195] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:03.396 [2024-12-07 17:36:36.667413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.396 [2024-12-07 17:36:36.667468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:03.396 [2024-12-07 17:36:36.667482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.222 ms 00:20:03.396 [2024-12-07 17:36:36.667492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.396 [2024-12-07 17:36:36.667603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.396 [2024-12-07 17:36:36.667617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:03.396 [2024-12-07 17:36:36.667627] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:03.396 [2024-12-07 17:36:36.667638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.396 [2024-12-07 17:36:36.675547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.396 [2024-12-07 17:36:36.675599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:03.396 [2024-12-07 17:36:36.675610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.855 ms 00:20:03.396 [2024-12-07 17:36:36.675620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.396 [2024-12-07 17:36:36.675735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.396 [2024-12-07 17:36:36.675748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:03.396 [2024-12-07 17:36:36.675759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:03.396 [2024-12-07 17:36:36.675773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.396 [2024-12-07 17:36:36.675798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.396 [2024-12-07 17:36:36.675810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:03.396 [2024-12-07 17:36:36.675818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:03.396 [2024-12-07 17:36:36.675828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.396 [2024-12-07 17:36:36.675850] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:03.396 [2024-12-07 17:36:36.679849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.396 [2024-12-07 17:36:36.679891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:03.396 [2024-12-07 17:36:36.679904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.001 ms 00:20:03.396 [2024-12-07 17:36:36.679913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.396 [2024-12-07 17:36:36.680003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.396 [2024-12-07 17:36:36.680014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:03.396 [2024-12-07 17:36:36.680026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:03.396 [2024-12-07 17:36:36.680036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.396 [2024-12-07 17:36:36.680060] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:03.396 [2024-12-07 17:36:36.680085] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:03.396 [2024-12-07 17:36:36.680132] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:03.396 [2024-12-07 17:36:36.680149] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:03.396 [2024-12-07 17:36:36.680260] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:03.396 [2024-12-07 17:36:36.680272] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:03.396 [2024-12-07 17:36:36.680290] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:03.396 [2024-12-07 17:36:36.680301] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:03.396 [2024-12-07 17:36:36.680314] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:03.396 [2024-12-07 17:36:36.680323] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:03.397 [2024-12-07 17:36:36.680333] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:03.397 [2024-12-07 17:36:36.680340] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:03.397 [2024-12-07 17:36:36.680352] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:03.397 [2024-12-07 17:36:36.680360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.397 [2024-12-07 17:36:36.680369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:03.397 [2024-12-07 17:36:36.680376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:20:03.397 [2024-12-07 17:36:36.680385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.397 [2024-12-07 17:36:36.680476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.397 [2024-12-07 17:36:36.680486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:03.397 [2024-12-07 17:36:36.680493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:03.397 [2024-12-07 17:36:36.680501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.397 [2024-12-07 17:36:36.680601] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:03.397 [2024-12-07 17:36:36.680611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:03.397 [2024-12-07 17:36:36.680619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:03.397 [2024-12-07 17:36:36.680629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:03.397 [2024-12-07 17:36:36.680637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:03.397 [2024-12-07 17:36:36.680647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:03.397 [2024-12-07 17:36:36.680655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:03.397 [2024-12-07 17:36:36.680666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:03.397 [2024-12-07 17:36:36.680673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:03.397 [2024-12-07 17:36:36.680683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:03.397 [2024-12-07 17:36:36.680689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:03.397 [2024-12-07 17:36:36.680697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:03.397 [2024-12-07 17:36:36.680704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:03.397 [2024-12-07 17:36:36.680717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:03.397 [2024-12-07 17:36:36.680724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:03.397 [2024-12-07 17:36:36.680733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:03.397 
[2024-12-07 17:36:36.680740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:03.397 [2024-12-07 17:36:36.680749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:03.397 [2024-12-07 17:36:36.680762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:03.397 [2024-12-07 17:36:36.680771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:03.397 [2024-12-07 17:36:36.680778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:03.397 [2024-12-07 17:36:36.680787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:03.397 [2024-12-07 17:36:36.680794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:03.397 [2024-12-07 17:36:36.680804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:03.397 [2024-12-07 17:36:36.680811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:03.397 [2024-12-07 17:36:36.680825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:03.397 [2024-12-07 17:36:36.680832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:03.397 [2024-12-07 17:36:36.680840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:03.397 [2024-12-07 17:36:36.680847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:03.397 [2024-12-07 17:36:36.680857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:03.397 [2024-12-07 17:36:36.680863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:03.397 [2024-12-07 17:36:36.680871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:03.397 [2024-12-07 17:36:36.680879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:03.397 [2024-12-07 17:36:36.680887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:03.397 [2024-12-07 17:36:36.680895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:03.397 [2024-12-07 17:36:36.680904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:03.397 [2024-12-07 17:36:36.680911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:03.397 [2024-12-07 17:36:36.680921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:03.397 [2024-12-07 17:36:36.680931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:03.397 [2024-12-07 17:36:36.680942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:03.397 [2024-12-07 17:36:36.680949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:03.397 [2024-12-07 17:36:36.680958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:03.397 [2024-12-07 17:36:36.680966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:03.397 [2024-12-07 17:36:36.680975] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:03.397 [2024-12-07 17:36:36.681001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:03.397 [2024-12-07 17:36:36.681010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:03.397 [2024-12-07 17:36:36.681018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:03.397 [2024-12-07 17:36:36.681029] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:03.397 [2024-12-07 17:36:36.681036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:03.397 [2024-12-07 17:36:36.681045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:03.397 [2024-12-07 17:36:36.681054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:03.397 [2024-12-07 17:36:36.681063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:03.397 [2024-12-07 17:36:36.681070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:03.397 [2024-12-07 17:36:36.681081] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:03.397 [2024-12-07 17:36:36.681090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:03.397 [2024-12-07 17:36:36.681104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:03.397 [2024-12-07 17:36:36.681112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:03.397 [2024-12-07 17:36:36.681121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:03.397 [2024-12-07 17:36:36.681129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:03.397 [2024-12-07 17:36:36.681138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:03.397 [2024-12-07 17:36:36.681145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:03.397 [2024-12-07 17:36:36.681153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:03.397 [2024-12-07 17:36:36.681160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:03.397 [2024-12-07 17:36:36.681169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:03.397 [2024-12-07 17:36:36.681177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:03.397 [2024-12-07 17:36:36.681186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:03.397 [2024-12-07 17:36:36.681194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:03.397 [2024-12-07 17:36:36.681204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:03.397 [2024-12-07 17:36:36.681213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:03.397 [2024-12-07 17:36:36.681223] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:03.397 [2024-12-07 
17:36:36.681233] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:03.397 [2024-12-07 17:36:36.681246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:03.397 [2024-12-07 17:36:36.681255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:03.397 [2024-12-07 17:36:36.681267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:03.397 [2024-12-07 17:36:36.681275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:03.397 [2024-12-07 17:36:36.681286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.397 [2024-12-07 17:36:36.681294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:03.398 [2024-12-07 17:36:36.681305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:20:03.398 [2024-12-07 17:36:36.681315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.398 [2024-12-07 17:36:36.713865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.398 [2024-12-07 17:36:36.713914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:03.398 [2024-12-07 17:36:36.713930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.488 ms 00:20:03.398 [2024-12-07 17:36:36.713942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.398 [2024-12-07 17:36:36.714094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.398 [2024-12-07 17:36:36.714110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:03.398 [2024-12-07 17:36:36.714122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:03.398 [2024-12-07 17:36:36.714130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.398 [2024-12-07 17:36:36.749427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.398 [2024-12-07 17:36:36.749474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:03.398 [2024-12-07 17:36:36.749489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.271 ms 00:20:03.398 [2024-12-07 17:36:36.749497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.398 [2024-12-07 17:36:36.749612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.398 [2024-12-07 17:36:36.749623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:03.398 [2024-12-07 17:36:36.749635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:03.398 [2024-12-07 17:36:36.749643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.398 [2024-12-07 17:36:36.750226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.398 [2024-12-07 17:36:36.750260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:03.398 [2024-12-07 17:36:36.750273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:20:03.398 [2024-12-07 17:36:36.750281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:03.398 [2024-12-07 17:36:36.750430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.398 [2024-12-07 17:36:36.750438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:03.398 [2024-12-07 17:36:36.750451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:20:03.398 [2024-12-07 17:36:36.750459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.398 [2024-12-07 17:36:36.768366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.398 [2024-12-07 17:36:36.768408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:03.398 [2024-12-07 17:36:36.768421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.880 ms 00:20:03.398 [2024-12-07 17:36:36.768430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.660 [2024-12-07 17:36:36.795455] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:03.660 [2024-12-07 17:36:36.795516] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:03.660 [2024-12-07 17:36:36.795537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.660 [2024-12-07 17:36:36.795549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:03.660 [2024-12-07 17:36:36.795565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.989 ms 00:20:03.660 [2024-12-07 17:36:36.795583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.660 [2024-12-07 17:36:36.825727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.660 [2024-12-07 17:36:36.825775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:03.660 [2024-12-07 17:36:36.825792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.023 ms 00:20:03.661 [2024-12-07 17:36:36.825802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.661 [2024-12-07 17:36:36.838616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.661 [2024-12-07 17:36:36.838661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:03.661 [2024-12-07 17:36:36.838679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.712 ms 00:20:03.661 [2024-12-07 17:36:36.838688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.661 [2024-12-07 17:36:36.851425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.661 [2024-12-07 17:36:36.851469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:03.661 [2024-12-07 17:36:36.851483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.647 ms 00:20:03.661 [2024-12-07 17:36:36.851491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.661 [2024-12-07 17:36:36.852228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.661 [2024-12-07 17:36:36.852256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:03.661 [2024-12-07 17:36:36.852268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:20:03.661 [2024-12-07 17:36:36.852275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.661 [2024-12-07 
17:36:36.918169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.661 [2024-12-07 17:36:36.918233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:03.661 [2024-12-07 17:36:36.918249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.863 ms 00:20:03.661 [2024-12-07 17:36:36.918258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.661 [2024-12-07 17:36:36.929634] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:03.661 [2024-12-07 17:36:36.948504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.661 [2024-12-07 17:36:36.948565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:03.661 [2024-12-07 17:36:36.948581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.149 ms 00:20:03.661 [2024-12-07 17:36:36.948592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.661 [2024-12-07 17:36:36.948681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.661 [2024-12-07 17:36:36.948694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:03.661 [2024-12-07 17:36:36.948704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:03.661 [2024-12-07 17:36:36.948714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.661 [2024-12-07 17:36:36.948771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.661 [2024-12-07 17:36:36.948784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:03.661 [2024-12-07 17:36:36.948793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:03.661 [2024-12-07 17:36:36.948807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.661 [2024-12-07 17:36:36.948834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.661 [2024-12-07 17:36:36.948844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:03.661 [2024-12-07 17:36:36.948853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:03.661 [2024-12-07 17:36:36.948865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.661 [2024-12-07 17:36:36.948902] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:03.661 [2024-12-07 17:36:36.948917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.661 [2024-12-07 17:36:36.948929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:03.661 [2024-12-07 17:36:36.948940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:03.661 [2024-12-07 17:36:36.948947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.661 [2024-12-07 17:36:36.974841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.661 [2024-12-07 17:36:36.974894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:03.661 [2024-12-07 17:36:36.974910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.861 ms 00:20:03.661 [2024-12-07 17:36:36.974920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.661 [2024-12-07 17:36:36.975045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.661 [2024-12-07 17:36:36.975059] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:03.661 [2024-12-07 17:36:36.975071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:03.661 [2024-12-07 17:36:36.975082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.661 [2024-12-07 17:36:36.976961] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:03.661 [2024-12-07 17:36:36.980394] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 329.243 ms, result 0 00:20:03.661 [2024-12-07 17:36:36.982603] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:03.661 Some configs were skipped because the RPC state that can call them passed over. 00:20:03.661 17:36:37 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:03.923 [2024-12-07 17:36:37.231282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.923 [2024-12-07 17:36:37.231350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:03.923 [2024-12-07 17:36:37.231363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.173 ms 00:20:03.923 [2024-12-07 17:36:37.231374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.923 [2024-12-07 17:36:37.231409] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.303 ms, result 0 00:20:03.923 true 00:20:03.923 17:36:37 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:04.184 [2024-12-07 17:36:37.447068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.184 [2024-12-07 17:36:37.447120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:04.184 [2024-12-07 17:36:37.447133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.688 ms 00:20:04.184 [2024-12-07 17:36:37.447141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.184 [2024-12-07 17:36:37.447179] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.802 ms, result 0 00:20:04.184 true 00:20:04.184 17:36:37 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76696 00:20:04.184 17:36:37 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76696 ']' 00:20:04.184 17:36:37 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76696 00:20:04.184 17:36:37 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:04.185 17:36:37 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.185 17:36:37 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76696 00:20:04.185 killing process with pid 76696 00:20:04.185 17:36:37 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:04.185 17:36:37 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:04.185 17:36:37 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76696' 00:20:04.185 17:36:37 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76696 00:20:04.185 17:36:37 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76696 00:20:05.129 [2024-12-07 17:36:38.228335] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.129 [2024-12-07 17:36:38.228399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:05.129 [2024-12-07 17:36:38.228411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:05.129 [2024-12-07 17:36:38.228420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.129 [2024-12-07 17:36:38.228441] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:05.129 [2024-12-07 17:36:38.230784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.129 [2024-12-07 17:36:38.230817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:05.129 [2024-12-07 17:36:38.230831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.327 ms 00:20:05.129 [2024-12-07 17:36:38.230837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.129 [2024-12-07 17:36:38.231065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.130 [2024-12-07 17:36:38.231076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:05.130 [2024-12-07 17:36:38.231085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:20:05.130 [2024-12-07 17:36:38.231093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.130 [2024-12-07 17:36:38.234548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.130 [2024-12-07 17:36:38.234579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:05.130 [2024-12-07 17:36:38.234591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.436 ms 00:20:05.130 [2024-12-07 17:36:38.234597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.130 [2024-12-07 17:36:38.239843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.130 [2024-12-07 17:36:38.239887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:05.130 [2024-12-07 17:36:38.239901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.211 ms 00:20:05.130 [2024-12-07 17:36:38.239907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.130 [2024-12-07 17:36:38.248402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.130 [2024-12-07 17:36:38.248442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:05.130 [2024-12-07 17:36:38.248453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.447 ms 00:20:05.130 [2024-12-07 17:36:38.248459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.130 [2024-12-07 17:36:38.255630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.130 [2024-12-07 17:36:38.255663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:05.130 [2024-12-07 17:36:38.255673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.135 ms 00:20:05.130 [2024-12-07 17:36:38.255679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.130 [2024-12-07 17:36:38.255806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.130 [2024-12-07 17:36:38.255815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:05.130 [2024-12-07 17:36:38.255824] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms
00:20:05.130 [2024-12-07 17:36:38.255830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:05.130 [2024-12-07 17:36:38.264654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.130 [2024-12-07 17:36:38.264682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:20:05.130 [2024-12-07 17:36:38.264691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.806 ms
00:20:05.130 [2024-12-07 17:36:38.264697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:05.130 [2024-12-07 17:36:38.273026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.130 [2024-12-07 17:36:38.273052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:20:05.130 [2024-12-07 17:36:38.273064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.297 ms
00:20:05.130 [2024-12-07 17:36:38.273069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:05.130 [2024-12-07 17:36:38.280731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.130 [2024-12-07 17:36:38.280760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:20:05.130 [2024-12-07 17:36:38.280769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.630 ms
00:20:05.130 [2024-12-07 17:36:38.280775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:05.130 [2024-12-07 17:36:38.288464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:05.130 [2024-12-07 17:36:38.288492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:20:05.130 [2024-12-07 17:36:38.288500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.640 ms
00:20:05.130 [2024-12-07 17:36:38.288506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:05.130 [2024-12-07 17:36:38.288544] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:05.130 [2024-12-07 17:36:38.288555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[... Bands 2-99: 98 identical per-band entries (all 0 / 261120 wr_cnt: 0 state: free) condensed ...]
00:20:05.131 [2024-12-07 17:36:38.289206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:20:05.131 [2024-12-07 17:36:38.289222] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:05.131 [2024-12-07 17:36:38.289232] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84040001-4357-46bc-af25-3c5f953812bb
00:20:05.131 [2024-12-07 17:36:38.289240] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:05.131 [2024-12-07 17:36:38.289248] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:20:05.131 [2024-12-07 17:36:38.289253] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:05.131 [2024-12-07 17:36:38.289260] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:20:05.131 [2024-12-07 17:36:38.289266] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:05.131 [2024-12-07 17:36:38.289273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:20:05.131 [2024-12-07 17:36:38.289279] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:20:05.131 [2024-12-07 17:36:38.289285] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:20:05.131 [2024-12-07 17:36:38.289290] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:20:05.131 [2024-12-07 17:36:38.289298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
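The statistics dump above is worth decoding: WAF is the write amplification factor, conventionally total media writes divided by user writes. With total writes = 960 and user writes = 0 (this shutdown persisted only FTL metadata; no user data was written yet), the quotient is reported as inf. A minimal shell sketch of the same arithmetic; the variable names are illustrative and the two values are copied from the dump:

  total_writes=960   # "total writes" from ftl_dev_dump_stats above
  user_writes=0      # "user writes" from ftl_dev_dump_stats above
  if [ "$user_writes" -eq 0 ]; then
      echo "WAF: inf"   # no user writes yet, so amplification is unbounded
  else
      echo "WAF: $(echo "scale=2; $total_writes / $user_writes" | bc)"
  fi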
00:20:05.131 [2024-12-07 17:36:38.289304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:05.131 [2024-12-07 17:36:38.289313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:20:05.131 [2024-12-07 17:36:38.289318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.131 [2024-12-07 17:36:38.299081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.131 [2024-12-07 17:36:38.299107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:05.132 [2024-12-07 17:36:38.299117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.744 ms 00:20:05.132 [2024-12-07 17:36:38.299123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.132 [2024-12-07 17:36:38.299412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.132 [2024-12-07 17:36:38.299426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:05.132 [2024-12-07 17:36:38.299437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:20:05.132 [2024-12-07 17:36:38.299442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.132 [2024-12-07 17:36:38.334279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.132 [2024-12-07 17:36:38.334303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:05.132 [2024-12-07 17:36:38.334312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.132 [2024-12-07 17:36:38.334319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.132 [2024-12-07 17:36:38.334390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.132 [2024-12-07 17:36:38.334398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:05.132 [2024-12-07 17:36:38.334408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.132 [2024-12-07 17:36:38.334414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.132 [2024-12-07 17:36:38.334450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.132 [2024-12-07 17:36:38.334457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:05.132 [2024-12-07 17:36:38.334465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.132 [2024-12-07 17:36:38.334471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.132 [2024-12-07 17:36:38.334485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.132 [2024-12-07 17:36:38.334491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:05.132 [2024-12-07 17:36:38.334499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.132 [2024-12-07 17:36:38.334506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.132 [2024-12-07 17:36:38.394289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.132 [2024-12-07 17:36:38.394320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:05.132 [2024-12-07 17:36:38.394330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.132 [2024-12-07 17:36:38.394336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.132 [2024-12-07 
17:36:38.443152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.132 [2024-12-07 17:36:38.443184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:05.132 [2024-12-07 17:36:38.443193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.132 [2024-12-07 17:36:38.443201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.132 [2024-12-07 17:36:38.443256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.132 [2024-12-07 17:36:38.443263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:05.132 [2024-12-07 17:36:38.443273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.132 [2024-12-07 17:36:38.443278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.132 [2024-12-07 17:36:38.443300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.132 [2024-12-07 17:36:38.443306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:05.132 [2024-12-07 17:36:38.443314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.132 [2024-12-07 17:36:38.443319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.132 [2024-12-07 17:36:38.443386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.132 [2024-12-07 17:36:38.443394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:05.132 [2024-12-07 17:36:38.443401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.132 [2024-12-07 17:36:38.443407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.132 [2024-12-07 17:36:38.443432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.132 [2024-12-07 17:36:38.443439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:05.132 [2024-12-07 17:36:38.443446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.132 [2024-12-07 17:36:38.443451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.132 [2024-12-07 17:36:38.443482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.132 [2024-12-07 17:36:38.443489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:05.132 [2024-12-07 17:36:38.443498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.132 [2024-12-07 17:36:38.443504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.132 [2024-12-07 17:36:38.443538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.132 [2024-12-07 17:36:38.443545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:05.132 [2024-12-07 17:36:38.443553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.132 [2024-12-07 17:36:38.443558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.132 [2024-12-07 17:36:38.443658] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 215.308 ms, result 0 00:20:05.701 17:36:38 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:05.701 17:36:38 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:05.701 [2024-12-07 17:36:39.035951] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:20:05.701 [2024-12-07 17:36:39.036092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76749 ] 00:20:05.961 [2024-12-07 17:36:39.196025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.961 [2024-12-07 17:36:39.281341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.219 [2024-12-07 17:36:39.490139] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:06.219 [2024-12-07 17:36:39.490192] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:06.479 [2024-12-07 17:36:39.641924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.479 [2024-12-07 17:36:39.641958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:06.479 [2024-12-07 17:36:39.641969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:06.479 [2024-12-07 17:36:39.641976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.479 [2024-12-07 17:36:39.644040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.479 [2024-12-07 17:36:39.644069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:06.479 [2024-12-07 17:36:39.644076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.041 ms 00:20:06.479 [2024-12-07 17:36:39.644082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.479 [2024-12-07 17:36:39.644211] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:06.479 [2024-12-07 17:36:39.644757] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:06.479 [2024-12-07 17:36:39.644773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.479 [2024-12-07 17:36:39.644779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:06.479 [2024-12-07 17:36:39.644786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:20:06.479 [2024-12-07 17:36:39.644792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.479 [2024-12-07 17:36:39.645906] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:06.479 [2024-12-07 17:36:39.655383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.479 [2024-12-07 17:36:39.655410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:06.479 [2024-12-07 17:36:39.655418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.477 ms 00:20:06.479 [2024-12-07 17:36:39.655424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.479 [2024-12-07 17:36:39.655493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.479 [2024-12-07 17:36:39.655502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:06.479 [2024-12-07 17:36:39.655509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.016 ms 00:20:06.479 [2024-12-07 17:36:39.655514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.479 [2024-12-07 17:36:39.659766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.479 [2024-12-07 17:36:39.659791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:06.479 [2024-12-07 17:36:39.659798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.223 ms 00:20:06.479 [2024-12-07 17:36:39.659804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.479 [2024-12-07 17:36:39.659873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.480 [2024-12-07 17:36:39.659881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:06.480 [2024-12-07 17:36:39.659887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:06.480 [2024-12-07 17:36:39.659893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.480 [2024-12-07 17:36:39.659910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.480 [2024-12-07 17:36:39.659916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:06.480 [2024-12-07 17:36:39.659922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:06.480 [2024-12-07 17:36:39.659927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.480 [2024-12-07 17:36:39.659944] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:06.480 [2024-12-07 17:36:39.662605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.480 [2024-12-07 17:36:39.662628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:06.480 [2024-12-07 17:36:39.662636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.664 ms 00:20:06.480 [2024-12-07 17:36:39.662641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.480 [2024-12-07 17:36:39.662669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.480 [2024-12-07 17:36:39.662676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:06.480 [2024-12-07 17:36:39.662682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:06.480 [2024-12-07 17:36:39.662688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.480 [2024-12-07 17:36:39.662702] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:06.480 [2024-12-07 17:36:39.662716] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:06.480 [2024-12-07 17:36:39.662744] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:06.480 [2024-12-07 17:36:39.662755] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:06.480 [2024-12-07 17:36:39.662833] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:06.480 [2024-12-07 17:36:39.662841] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:06.480 [2024-12-07 17:36:39.662849] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:06.480 [2024-12-07 17:36:39.662859] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:06.480 [2024-12-07 17:36:39.662865] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:06.480 [2024-12-07 17:36:39.662871] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:06.480 [2024-12-07 17:36:39.662877] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:06.480 [2024-12-07 17:36:39.662882] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:06.480 [2024-12-07 17:36:39.662888] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:06.480 [2024-12-07 17:36:39.662894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.480 [2024-12-07 17:36:39.662899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:06.480 [2024-12-07 17:36:39.662908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:20:06.480 [2024-12-07 17:36:39.662914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.480 [2024-12-07 17:36:39.662989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.480 [2024-12-07 17:36:39.662997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:06.480 [2024-12-07 17:36:39.663003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:06.480 [2024-12-07 17:36:39.663009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.480 [2024-12-07 17:36:39.663085] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:06.480 [2024-12-07 17:36:39.663098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:06.480 [2024-12-07 17:36:39.663104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:06.480 [2024-12-07 17:36:39.663111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.480 [2024-12-07 17:36:39.663116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:06.480 [2024-12-07 17:36:39.663123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:06.480 [2024-12-07 17:36:39.663128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:06.480 [2024-12-07 17:36:39.663133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:06.480 [2024-12-07 17:36:39.663139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:06.480 [2024-12-07 17:36:39.663144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:06.480 [2024-12-07 17:36:39.663150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:06.480 [2024-12-07 17:36:39.663159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:06.480 [2024-12-07 17:36:39.663164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:06.480 [2024-12-07 17:36:39.663169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:06.480 [2024-12-07 17:36:39.663174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:06.480 [2024-12-07 17:36:39.663178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.480 [2024-12-07 17:36:39.663183] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:06.480 [2024-12-07 17:36:39.663188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:06.480 [2024-12-07 17:36:39.663193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.480 [2024-12-07 17:36:39.663199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:06.480 [2024-12-07 17:36:39.663203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:06.480 [2024-12-07 17:36:39.663208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.480 [2024-12-07 17:36:39.663213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:06.480 [2024-12-07 17:36:39.663218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:06.480 [2024-12-07 17:36:39.663223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.480 [2024-12-07 17:36:39.663228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:06.480 [2024-12-07 17:36:39.663232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:06.480 [2024-12-07 17:36:39.663237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.480 [2024-12-07 17:36:39.663242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:06.480 [2024-12-07 17:36:39.663246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:06.480 [2024-12-07 17:36:39.663251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.480 [2024-12-07 17:36:39.663256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:06.480 [2024-12-07 17:36:39.663261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:06.480 [2024-12-07 17:36:39.663265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:06.480 [2024-12-07 17:36:39.663270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:06.480 [2024-12-07 17:36:39.663274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:06.480 [2024-12-07 17:36:39.663279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:06.480 [2024-12-07 17:36:39.663285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:06.480 [2024-12-07 17:36:39.663290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:06.480 [2024-12-07 17:36:39.663295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.480 [2024-12-07 17:36:39.663300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:06.480 [2024-12-07 17:36:39.663305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:06.480 [2024-12-07 17:36:39.663310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.480 [2024-12-07 17:36:39.663315] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:06.480 [2024-12-07 17:36:39.663321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:06.480 [2024-12-07 17:36:39.663327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:06.480 [2024-12-07 17:36:39.663333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.480 [2024-12-07 17:36:39.663339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:06.480 
[2024-12-07 17:36:39.663344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:06.480 [2024-12-07 17:36:39.663349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:06.480 [2024-12-07 17:36:39.663354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:06.480 [2024-12-07 17:36:39.663359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:06.480 [2024-12-07 17:36:39.663364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:06.480 [2024-12-07 17:36:39.663370] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:06.480 [2024-12-07 17:36:39.663377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:06.480 [2024-12-07 17:36:39.663383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:06.480 [2024-12-07 17:36:39.663389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:06.480 [2024-12-07 17:36:39.663394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:06.480 [2024-12-07 17:36:39.663399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:06.480 [2024-12-07 17:36:39.663404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:06.480 [2024-12-07 17:36:39.663409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:06.480 [2024-12-07 17:36:39.663414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:06.480 [2024-12-07 17:36:39.663419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:06.481 [2024-12-07 17:36:39.663424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:06.481 [2024-12-07 17:36:39.663430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:06.481 [2024-12-07 17:36:39.663435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:06.481 [2024-12-07 17:36:39.663440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:06.481 [2024-12-07 17:36:39.663445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:06.481 [2024-12-07 17:36:39.663450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:06.481 [2024-12-07 17:36:39.663456] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:06.481 [2024-12-07 17:36:39.663462] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:06.481 [2024-12-07 17:36:39.663468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:06.481 [2024-12-07 17:36:39.663474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:06.481 [2024-12-07 17:36:39.663479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:06.481 [2024-12-07 17:36:39.663484] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:06.481 [2024-12-07 17:36:39.663490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-07 17:36:39.663498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:06.481 [2024-12-07 17:36:39.663504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:20:06.481 [2024-12-07 17:36:39.663509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-07 17:36:39.684071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-07 17:36:39.684097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:06.481 [2024-12-07 17:36:39.684105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.522 ms 00:20:06.481 [2024-12-07 17:36:39.684111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-07 17:36:39.684204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-07 17:36:39.684216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:06.481 [2024-12-07 17:36:39.684223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:06.481 [2024-12-07 17:36:39.684228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-07 17:36:39.725428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-07 17:36:39.725460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:06.481 [2024-12-07 17:36:39.725472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.183 ms 00:20:06.481 [2024-12-07 17:36:39.725478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-07 17:36:39.725537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-07 17:36:39.725559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:06.481 [2024-12-07 17:36:39.725567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:06.481 [2024-12-07 17:36:39.725573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-07 17:36:39.725855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-07 17:36:39.725880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:06.481 [2024-12-07 17:36:39.725887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:20:06.481 [2024-12-07 17:36:39.725896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-07 
17:36:39.726006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-07 17:36:39.726014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:06.481 [2024-12-07 17:36:39.726020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:20:06.481 [2024-12-07 17:36:39.726025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-07 17:36:39.736683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-07 17:36:39.736711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:06.481 [2024-12-07 17:36:39.736719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.642 ms 00:20:06.481 [2024-12-07 17:36:39.736725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-07 17:36:39.746482] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:06.481 [2024-12-07 17:36:39.746510] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:06.481 [2024-12-07 17:36:39.746519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-07 17:36:39.746526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:06.481 [2024-12-07 17:36:39.746533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.710 ms 00:20:06.481 [2024-12-07 17:36:39.746539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-07 17:36:39.764824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-07 17:36:39.764851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:06.481 [2024-12-07 17:36:39.764860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.239 ms 00:20:06.481 [2024-12-07 17:36:39.764867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-07 17:36:39.773687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-07 17:36:39.773713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:06.481 [2024-12-07 17:36:39.773720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.769 ms 00:20:06.481 [2024-12-07 17:36:39.773726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-07 17:36:39.782154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-07 17:36:39.782180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:06.481 [2024-12-07 17:36:39.782187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.388 ms 00:20:06.481 [2024-12-07 17:36:39.782192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-07 17:36:39.782648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-07 17:36:39.782664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:06.481 [2024-12-07 17:36:39.782671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:20:06.481 [2024-12-07 17:36:39.782677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-07 17:36:39.826084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:06.481 [2024-12-07 17:36:39.826122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:06.481 [2024-12-07 17:36:39.826133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.390 ms 00:20:06.481 [2024-12-07 17:36:39.826139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-07 17:36:39.834096] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:06.481 [2024-12-07 17:36:39.845453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-07 17:36:39.845483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:06.481 [2024-12-07 17:36:39.845493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.247 ms 00:20:06.481 [2024-12-07 17:36:39.845503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-07 17:36:39.845586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-07 17:36:39.845595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:06.481 [2024-12-07 17:36:39.845602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:06.481 [2024-12-07 17:36:39.845608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-07 17:36:39.845644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-07 17:36:39.845651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:06.481 [2024-12-07 17:36:39.845657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:06.481 [2024-12-07 17:36:39.845665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-07 17:36:39.845689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-07 17:36:39.845695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:06.481 [2024-12-07 17:36:39.845701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:06.481 [2024-12-07 17:36:39.845707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.481 [2024-12-07 17:36:39.845730] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:06.481 [2024-12-07 17:36:39.845737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.481 [2024-12-07 17:36:39.845743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:06.481 [2024-12-07 17:36:39.845750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:06.481 [2024-12-07 17:36:39.845755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.742 [2024-12-07 17:36:39.863631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.742 [2024-12-07 17:36:39.863657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:06.742 [2024-12-07 17:36:39.863665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.861 ms 00:20:06.742 [2024-12-07 17:36:39.863672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.742 [2024-12-07 17:36:39.863737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.742 [2024-12-07 17:36:39.863746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization
00:20:06.742 [2024-12-07 17:36:39.863753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms
00:20:06.742 [2024-12-07 17:36:39.863759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:06.742 [2024-12-07 17:36:39.865704] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:06.742 [2024-12-07 17:36:39.875216] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 222.882 ms, result 0
00:20:06.742 [2024-12-07 17:36:39.876729] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:06.742 [2024-12-07 17:36:39.890546] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:07.686 [2024-12-07T17:36:42.011Z] Copying: 16/256 [MB] (16 MBps) [... 13 intermediate progress ticks condensed ...] [2024-12-07T17:36:54.824Z] Copying: 256/256 [MB] (average 17 MBps)
[2024-12-07 17:36:54.572525] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:21.442 [2024-12-07 17:36:54.582822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.442 [2024-12-07 17:36:54.582871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:20:21.442 [2024-12-07 17:36:54.582893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:20:21.442 [2024-12-07 17:36:54.582902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:21.442 [2024-12-07 17:36:54.582926] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:20:21.442 [2024-12-07 17:36:54.585926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.442 [2024-12-07 17:36:54.585965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:20:21.442 [2024-12-07 17:36:54.585977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.986 ms
00:20:21.442 [2024-12-07 17:36:54.585995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:21.442 [2024-12-07 17:36:54.586261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.442 [2024-12-07 17:36:54.586273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:20:21.442 [2024-12-07 17:36:54.586283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms
00:20:21.442 [2024-12-07 17:36:54.586292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:21.442 [2024-12-07 17:36:54.590006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
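The condensed dd progress ticker above is internally consistent: spdk_dd was invoked with --count=65536 I/O units, which at a 4 KiB unit size (implied by the totals) is exactly the 256 MB it reports copying, and the ticker spans roughly 17:36:40 to 17:36:54.8, about 15 s, in line with the reported 17 MBps average. A quick shell sanity check of both figures, with the numbers copied from the log:

  echo "$(( 65536 * 4 / 1024 )) MiB"   # 65536 I/O units x 4 KiB each = 256 MiB
  echo "scale=1; 256 / 15" | bc        # 256 MB over ~15 s is ~17 MB/s, matching the average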
00:20:21.442 [2024-12-07 17:36:54.590031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:20:21.442 [2024-12-07 17:36:54.590041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.696 ms
00:20:21.442 [2024-12-07 17:36:54.590049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:21.442 [2024-12-07 17:36:54.597118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.442 [2024-12-07 17:36:54.597155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:20:21.442 [2024-12-07 17:36:54.597167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.051 ms
00:20:21.442 [2024-12-07 17:36:54.597175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:21.442 [2024-12-07 17:36:54.622744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.442 [2024-12-07 17:36:54.622791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:20:21.442 [2024-12-07 17:36:54.622804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.504 ms
00:20:21.442 [2024-12-07 17:36:54.622812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:21.442 [2024-12-07 17:36:54.639230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.442 [2024-12-07 17:36:54.639278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:20:21.442 [2024-12-07 17:36:54.639297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.366 ms
00:20:21.442 [2024-12-07 17:36:54.639305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:21.442 [2024-12-07 17:36:54.639461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.442 [2024-12-07 17:36:54.639474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:20:21.442 [2024-12-07 17:36:54.639492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms
00:20:21.442 [2024-12-07 17:36:54.639500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:21.442 [2024-12-07 17:36:54.665628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.442 [2024-12-07 17:36:54.665674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:20:21.442 [2024-12-07 17:36:54.665686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.111 ms
00:20:21.442 [2024-12-07 17:36:54.665693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:21.443 [2024-12-07 17:36:54.691293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.443 [2024-12-07 17:36:54.691338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:20:21.443 [2024-12-07 17:36:54.691351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.541 ms
00:20:21.443 [2024-12-07 17:36:54.691358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:21.443 [2024-12-07 17:36:54.716466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.443 [2024-12-07 17:36:54.716512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:20:21.443 [2024-12-07 17:36:54.716524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.047 ms
00:20:21.443 [2024-12-07 17:36:54.716531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
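Each management step in this trace is logged as a fixed quadruplet from mngt/ftl_mngt.c: Action (line 427), name (line 428), duration (line 430) and status (line 431), so per-step timings can be extracted mechanically. A minimal sketch, assuming the log has been reflowed to one record per line and saved as ftl.log (both are assumptions, not part of the original run):

  # pair each step name (428:trace_step) with the duration that follows it (430:trace_step)
  awk '/428:trace_step/ { sub(/.*name: /, "");     name = $0 }
       /430:trace_step/ { sub(/.*duration: /, ""); print name ": " $0 }' ftl.log

Run against the records above, this prints lines such as "Persist superblock: 25.047 ms".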
00:20:21.443 [2024-12-07 17:36:54.741258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:21.443 [2024-12-07 17:36:54.741305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:20:21.443 [2024-12-07 17:36:54.741316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.636 ms
00:20:21.443 [2024-12-07 17:36:54.741323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:21.443 [2024-12-07 17:36:54.741370] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:21.443 [2024-12-07 17:36:54.741386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[... Bands 2-95: 94 identical per-band entries (all 0 / 261120 wr_cnt: 0 state: free) condensed ...]
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120
wr_cnt: 0 state: free 00:20:21.444 [2024-12-07 17:36:54.742172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:21.444 [2024-12-07 17:36:54.742180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:21.444 [2024-12-07 17:36:54.742187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:21.444 [2024-12-07 17:36:54.742195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:21.444 [2024-12-07 17:36:54.742211] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:21.444 [2024-12-07 17:36:54.742218] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84040001-4357-46bc-af25-3c5f953812bb 00:20:21.444 [2024-12-07 17:36:54.742227] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:21.444 [2024-12-07 17:36:54.742243] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:21.444 [2024-12-07 17:36:54.742251] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:21.444 [2024-12-07 17:36:54.742259] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:21.444 [2024-12-07 17:36:54.742266] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:21.444 [2024-12-07 17:36:54.742274] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:21.444 [2024-12-07 17:36:54.742286] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:21.444 [2024-12-07 17:36:54.742293] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:21.444 [2024-12-07 17:36:54.742299] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:21.444 [2024-12-07 17:36:54.742307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.444 [2024-12-07 17:36:54.742315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:21.444 [2024-12-07 17:36:54.742324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.939 ms 00:20:21.444 [2024-12-07 17:36:54.742333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.444 [2024-12-07 17:36:54.755803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.444 [2024-12-07 17:36:54.755844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:21.444 [2024-12-07 17:36:54.755855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.436 ms 00:20:21.444 [2024-12-07 17:36:54.755863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.444 [2024-12-07 17:36:54.756305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.444 [2024-12-07 17:36:54.756327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:21.444 [2024-12-07 17:36:54.756336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:20:21.444 [2024-12-07 17:36:54.756344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.444 [2024-12-07 17:36:54.795190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.444 [2024-12-07 17:36:54.795238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:21.444 [2024-12-07 17:36:54.795250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
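The statistics dump above is the health summary for this shutdown: every band reads 0 / 261120 (apparently no valid blocks out of the band's 261120 total, so all bands are free after the trim), and WAF is logged as inf because the device issued 960 internal writes against 0 user writes. Write amplification factor is simply total writes divided by user writes, so a pass with no user writes is reported as infinite. A minimal standalone sketch of that ratio follows; the function and counter names are made up for illustration and are not SPDK API:

    /* waf.c -- illustrative only; mirrors the "WAF: inf" line above. */
    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    static double waf(uint64_t total_writes, uint64_t user_writes)
    {
        /* Zero user writes makes the ratio undefined; report infinity,
         * which printf renders as "inf", matching the log. */
        if (user_writes == 0)
            return INFINITY;
        return (double)total_writes / (double)user_writes;
    }

    int main(void)
    {
        printf("WAF: %g\n", waf(960, 0)); /* counters from the dump above */
        return 0;
    }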
00:20:21.444 [2024-12-07 17:36:54.795190] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, duration: 0.000 ms, status: 0
00:20:21.444 [2024-12-07 17:36:54.795351] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata, duration: 0.000 ms, status: 0
00:20:21.444 [2024-12-07 17:36:54.795428] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map, duration: 0.000 ms, status: 0
00:20:21.444 [2024-12-07 17:36:54.795476] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map, duration: 0.000 ms, status: 0
00:20:21.705 [2024-12-07 17:36:54.881064] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache, duration: 0.000 ms, status: 0
00:20:21.705 [2024-12-07 17:36:54.950525] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata, duration: 0.000 ms, status: 0
00:20:21.705 [2024-12-07 17:36:54.950671] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel, duration: 0.000 ms, status: 0
00:20:21.705 [2024-12-07 17:36:54.950731] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands, duration: 0.000 ms, status: 0
00:20:21.705 [2024-12-07 17:36:54.950867] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools, duration: 0.000 ms, status: 0
00:20:21.705 [2024-12-07 17:36:54.950934] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock, duration: 0.000 ms, status: 0
00:20:21.705 [2024-12-07 17:36:54.951028] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev, duration: 0.000 ms, status: 0
00:20:21.705 [2024-12-07 17:36:54.951106] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev, duration: 0.000 ms, status: 0
00:20:21.705 [2024-12-07 17:36:54.951295] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 368.459 ms, result 0
00:20:22.647 17:36:55 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:20:22.647 17:36:55 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
00:20:23.218 17:36:56 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:20:23.218 [2024-12-07 17:36:56.393921] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
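The three ftl.ftl_trim commands above are the actual verification for this phase: cmp --bytes=4194304 asserts that the first 4 MiB of the data file reads back identical to /dev/zero after the trim, md5sum records a checksum of the file, and spdk_dd then writes a random pattern back through the ftl0 bdev for the next round. A minimal standalone sketch of what the cmp step checks (the fallback path and buffer size are illustrative):

    /* zerocheck.c -- rough equivalent of `cmp --bytes=4194304 FILE /dev/zero`. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : "data"; /* test file path */
        size_t remaining = 4194304;                     /* --bytes=4194304 */
        unsigned char buf[65536];
        FILE *f = fopen(path, "rb");

        if (!f) {
            perror("fopen");
            return 2;
        }
        while (remaining > 0) {
            size_t want = remaining < sizeof(buf) ? remaining : sizeof(buf);
            size_t got = fread(buf, 1, want, f);
            if (got == 0) {                 /* EOF or error before 4 MiB */
                fprintf(stderr, "short read\n");
                return 2;
            }
            for (size_t i = 0; i < got; i++) {
                if (buf[i] != 0) {          /* differs from /dev/zero */
                    fprintf(stderr, "non-zero byte found\n");
                    return 1;
                }
            }
            remaining -= got;
        }
        fclose(f);
        return 0;                           /* region reads back as zeroes */
    }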
00:20:23.218 [2024-12-07 17:36:56.394089] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76936 ]
00:20:23.218 [2024-12-07 17:36:56.559937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:23.479 [2024-12-07 17:36:56.678899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:23.740 [2024-12-07 17:36:56.973758] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:23.740 [2024-12-07 17:36:56.973843] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:24.003 [2024-12-07 17:36:57.136575] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration, duration: 0.005 ms, status: 0
00:20:24.003 [2024-12-07 17:36:57.139651] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev, duration: 2.972 ms, status: 0
00:20:24.003 [2024-12-07 17:36:57.139838] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:20:24.003 [2024-12-07 17:36:57.140605] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:20:24.003 [2024-12-07 17:36:57.140632] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev, duration: 0.803 ms, status: 0
00:20:24.003 [2024-12-07 17:36:57.142463] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:20:24.003 [2024-12-07 17:36:57.156740] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block, duration: 14.278 ms, status: 0
00:20:24.004 [2024-12-07 17:36:57.156929] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block, duration: 0.030 ms, status: 0
00:20:24.004 [2024-12-07 17:36:57.164778] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools, duration: 7.753 ms, status: 0
00:20:24.004 [2024-12-07 17:36:57.164943] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands, duration: 0.062 ms, status: 0
00:20:24.004 [2024-12-07 17:36:57.165036] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device, duration: 0.008 ms, status: 0
00:20:24.004 [2024-12-07 17:36:57.165083] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:20:24.004 [2024-12-07 17:36:57.169101] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel, duration: 4.022 ms, status: 0
00:20:24.004 [2024-12-07 17:36:57.169237] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands, duration: 0.015 ms, status: 0
00:20:24.004 [2024-12-07 17:36:57.169291] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:20:24.004 [2024-12-07 17:36:57.169314] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:20:24.004 [2024-12-07 17:36:57.169352] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:20:24.004 [2024-12-07 17:36:57.169374] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:20:24.004 [2024-12-07 17:36:57.169480] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:20:24.004 [2024-12-07 17:36:57.169493] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:20:24.004 [2024-12-07 17:36:57.169505] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:20:24.004 [2024-12-07 17:36:57.169518] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:20:24.004 [2024-12-07 17:36:57.169528] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:20:24.004 [2024-12-07 17:36:57.169537] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:20:24.004 [2024-12-07 17:36:57.169562] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:20:24.004 [2024-12-07 17:36:57.169570] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:20:24.004 [2024-12-07 17:36:57.169578] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:20:24.004 [2024-12-07 17:36:57.169586] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout, duration: 0.299 ms, status: 0
00:20:24.004 [2024-12-07 17:36:57.169699] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout, duration: 0.070 ms, status: 0
00:20:24.004 [2024-12-07 17:36:57.169837] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout (region: offset / blocks, MiB):
    sb:              0.00   / 0.12
    l2p:             0.12   / 90.00
    band_md:         90.12  / 0.50
    band_md_mirror:  90.62  / 0.50
    nvc_md:          123.88 / 0.12
    nvc_md_mirror:   124.00 / 0.12
    p2l0:            91.12  / 8.00
    p2l1:            99.12  / 8.00
    p2l2:            107.12 / 8.00
    p2l3:            115.12 / 8.00
    trim_md:         123.12 / 0.25
    trim_md_mirror:  123.38 / 0.25
    trim_log:        123.62 / 0.12
    trim_log_mirror: 123.75 / 0.12
00:20:24.004 [2024-12-07 17:36:57.170185] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout (region: offset / blocks, MiB):
    sb_mirror:       0.00      / 0.12
    vmap:            102400.25 / 3.38
    data_btm:        0.25      / 102400.00
00:20:24.004 [2024-12-07 17:36:57.170260] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    Region type:0x0        ver:5 blk_offs:0x0    blk_sz:0x20
    Region type:0x2        ver:0 blk_offs:0x20   blk_sz:0x5a00
    Region type:0x3        ver:2 blk_offs:0x5a20 blk_sz:0x80
    Region type:0x4        ver:2 blk_offs:0x5aa0 blk_sz:0x80
    Region type:0xa        ver:2 blk_offs:0x5b20 blk_sz:0x800
    Region type:0xb        ver:2 blk_offs:0x6320 blk_sz:0x800
    Region type:0xc        ver:2 blk_offs:0x6b20 blk_sz:0x800
    Region type:0xd        ver:2 blk_offs:0x7320 blk_sz:0x800
    Region type:0xe        ver:0 blk_offs:0x7b20 blk_sz:0x40
    Region type:0xf        ver:0 blk_offs:0x7b60 blk_sz:0x40
    Region type:0x10       ver:1 blk_offs:0x7ba0 blk_sz:0x20
    Region type:0x11       ver:1 blk_offs:0x7bc0 blk_sz:0x20
    Region type:0x6        ver:2 blk_offs:0x7be0 blk_sz:0x20
    Region type:0x7        ver:2 blk_offs:0x7c00 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:20:24.005 [2024-12-07 17:36:57.170375] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    Region type:0x1        ver:5 blk_offs:0x0       blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x20      blk_sz:0x20
    Region type:0x9        ver:0 blk_offs:0x40      blk_sz:0x1900000
    Region type:0x5        ver:0 blk_offs:0x1900040 blk_sz:0x360
    Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:20:24.005 [2024-12-07 17:36:57.170424] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade, duration: 0.655 ms, status: 0
00:20:24.005 [2024-12-07 17:36:57.202127] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata, duration: 31.607 ms, status: 0
00:20:24.005 [2024-12-07 17:36:57.202327] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses, duration: 0.065 ms, status: 0
00:20:24.005 [2024-12-07 17:36:57.246514] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache, duration: 44.131 ms, status: 0
00:20:24.005 [2024-12-07 17:36:57.246705] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map, duration: 0.004 ms, status: 0
00:20:24.005 [2024-12-07 17:36:57.247279] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map, duration: 0.518 ms, status: 0
00:20:24.005 [2024-12-07 17:36:57.247498] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata, duration: 0.125 ms, status: 0
00:20:24.005 [2024-12-07 17:36:57.263738] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc, duration: 16.186 ms, status: 0
00:20:24.005 [2024-12-07 17:36:57.278145] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:20:24.005 [2024-12-07 17:36:57.278193] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:20:24.005 [2024-12-07 17:36:57.278206] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata, duration: 14.290 ms, status: 0
00:20:24.005 [2024-12-07 17:36:57.304252] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata, duration: 25.921 ms, status: 0
00:20:24.005 [2024-12-07 17:36:57.317428] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata, duration: 13.014 ms, status: 0
00:20:24.005 [2024-12-07 17:36:57.329917] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata, duration: 12.328 ms, status: 0
00:20:24.005 [2024-12-07 17:36:57.330649] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing, duration: 0.538 ms, status: 0
00:20:24.267 [2024-12-07 17:36:57.397434] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints, duration: 66.711 ms, status: 0
00:20:24.267 [2024-12-07 17:36:57.408676] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:20:24.267 [2024-12-07 17:36:57.427443] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P, duration: 29.794 ms, status: 0
00:20:24.267 [2024-12-07 17:36:57.427621] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P, duration: 0.016 ms, status: 0
00:20:24.267 [2024-12-07 17:36:57.427708] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization, duration: 0.034 ms, status: 0
00:20:24.267 [2024-12-07 17:36:57.427774] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller, duration: 0.011 ms, status: 0
00:20:24.267 [2024-12-07 17:36:57.427837] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:20:24.267 [2024-12-07 17:36:57.427849] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup, duration: 0.013 ms, status: 0
00:20:24.268 [2024-12-07 17:36:57.454437] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state, duration: 26.542 ms, status: 0
00:20:24.268 [2024-12-07 17:36:57.454649] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization, duration: 0.036 ms, status: 0
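One figure in the startup dump above is worth cross-checking: the layout reports L2P entries: 23592960 with L2P address size: 4 (bytes), and the NV cache layout reserves a 90.00 MiB l2p region. 23592960 entries x 4 bytes is exactly 90 MiB, so the mapping-table region is sized precisely for the advertised entry count. A quick standalone check (variable names are illustrative, not SPDK API):

    /* l2p_size.c -- sanity-checks the l2p region size from the layout dump. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t entries = 23592960;  /* "L2P entries" from the log */
        uint64_t addr_sz = 4;         /* "L2P address size" in bytes */
        double mib = (double)(entries * addr_sz) / (1024.0 * 1024.0);
        printf("l2p region: %.2f MiB\n", mib); /* prints 90.00, matching the dump */
        return 0;
    }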
00:20:24.268 [2024-12-07 17:36:57.456013] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:24.268 [2024-12-07 17:36:57.459446] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 319.097 ms, result 0
00:20:24.268 [2024-12-07 17:36:57.460906] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:24.268 [2024-12-07 17:36:57.474568] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:24.528 [2024-12-07T17:36:57.910Z] Copying: 4096/4096 [kB] (average 9990 kBps)
00:20:24.528 [2024-12-07 17:36:57.887816] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:24.528 [2024-12-07 17:36:57.896864] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel, duration: 0.003 ms, status: 0
00:20:24.528 [2024-12-07 17:36:57.896973] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:20:24.528 [2024-12-07 17:36:57.899972] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device, duration: 2.963 ms, status: 0
00:20:24.528 [2024-12-07 17:36:57.903338] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller, duration: 3.276 ms, status: 0
00:20:24.528 [2024-12-07 17:36:57.907917] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P, duration: 4.493 ms, status: 0
00:20:24.794 [2024-12-07 17:36:57.914928] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims, duration: 6.905 ms, status: 0
00:20:24.794 [2024-12-07 17:36:57.940164] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata, duration: 25.116 ms, status: 0
00:20:24.794 [2024-12-07 17:36:57.955977] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata, duration: 15.697 ms, status: 0
00:20:24.794 [2024-12-07 17:36:57.956236] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata, duration: 0.095 ms, status: 0
00:20:24.794 [2024-12-07 17:36:57.981813] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist band info metadata, duration: 25.519 ms, status: 0
00:20:24.794 [2024-12-07 17:36:58.007590] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist trim metadata, duration: 25.654 ms, status: 0
00:20:24.794 [2024-12-07 17:36:58.032641] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock, duration: 24.930 ms, status: 0
00:20:24.794 [2024-12-07 17:36:58.057469] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state, duration: 24.677 ms, status: 0
00:20:24.794 [2024-12-07 17:36:58.057608] ftl_debug.c: ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:24.794 [2024-12-07 17:36:58.057624] ftl_debug.c: ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100: 0 / 261120 wr_cnt: 0 state: free
00:20:24.795 [2024-12-07 17:36:58.058450] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84040001-4357-46bc-af25-3c5f953812bb
00:20:24.795 [2024-12-07 17:36:58.058459] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:24.795 [2024-12-07 17:36:58.058467] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total
writes: 960 00:20:24.795 [2024-12-07 17:36:58.058475] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:24.795 [2024-12-07 17:36:58.058484] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:24.795 [2024-12-07 17:36:58.058492] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:24.795 [2024-12-07 17:36:58.058500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:24.795 [2024-12-07 17:36:58.058511] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:24.795 [2024-12-07 17:36:58.058520] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:24.795 [2024-12-07 17:36:58.058526] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:24.795 [2024-12-07 17:36:58.058534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.795 [2024-12-07 17:36:58.058542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:24.795 [2024-12-07 17:36:58.058551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.927 ms 00:20:24.795 [2024-12-07 17:36:58.058560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.795 [2024-12-07 17:36:58.072026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.795 [2024-12-07 17:36:58.072065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:24.795 [2024-12-07 17:36:58.072075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.432 ms 00:20:24.795 [2024-12-07 17:36:58.072083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.795 [2024-12-07 17:36:58.072476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.795 [2024-12-07 17:36:58.072488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:24.795 [2024-12-07 17:36:58.072498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:20:24.796 [2024-12-07 17:36:58.072505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.796 [2024-12-07 17:36:58.111654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.796 [2024-12-07 17:36:58.111700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:24.796 [2024-12-07 17:36:58.111713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.796 [2024-12-07 17:36:58.111728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.796 [2024-12-07 17:36:58.111805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.796 [2024-12-07 17:36:58.111814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:24.796 [2024-12-07 17:36:58.111822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.796 [2024-12-07 17:36:58.111830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.796 [2024-12-07 17:36:58.111880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.796 [2024-12-07 17:36:58.111890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:24.796 [2024-12-07 17:36:58.111899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.796 [2024-12-07 17:36:58.111906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.796 [2024-12-07 17:36:58.111928] 
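Reading the statistics block above: every band is still completely free (0 of 261120 blocks valid, wr_cnt 0), total valid LBAs is 0, and the counters show 960 total writes against 0 user writes. Assuming the conventional definition of write amplification,

    WAF = total media writes / user writes = 960 / 0

the ratio is undefined, which the debug dump prints as "inf"; since no user I/O was issued on this instance, the 960 writes are presumably the FTL's own metadata (superblock, band, trim and P2L metadata) persisted during startup and shutdown.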
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.796 [2024-12-07 17:36:58.111936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:24.796 [2024-12-07 17:36:58.111944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.796 [2024-12-07 17:36:58.111951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.057 [2024-12-07 17:36:58.196568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.057 [2024-12-07 17:36:58.196626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:25.057 [2024-12-07 17:36:58.196641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.057 [2024-12-07 17:36:58.196656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.057 [2024-12-07 17:36:58.265714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.057 [2024-12-07 17:36:58.265772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:25.057 [2024-12-07 17:36:58.265786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.057 [2024-12-07 17:36:58.265795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.057 [2024-12-07 17:36:58.265853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.057 [2024-12-07 17:36:58.265863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:25.057 [2024-12-07 17:36:58.265872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.057 [2024-12-07 17:36:58.265882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.057 [2024-12-07 17:36:58.265915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.057 [2024-12-07 17:36:58.265931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:25.057 [2024-12-07 17:36:58.265940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.057 [2024-12-07 17:36:58.265949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.057 [2024-12-07 17:36:58.266073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.057 [2024-12-07 17:36:58.266086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:25.057 [2024-12-07 17:36:58.266095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.057 [2024-12-07 17:36:58.266103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.057 [2024-12-07 17:36:58.266136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.057 [2024-12-07 17:36:58.266146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:25.057 [2024-12-07 17:36:58.266158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.058 [2024-12-07 17:36:58.266166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.058 [2024-12-07 17:36:58.266207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.058 [2024-12-07 17:36:58.266217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:25.058 [2024-12-07 17:36:58.266226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.058 [2024-12-07 17:36:58.266234] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:25.058 [2024-12-07 17:36:58.266281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.058 [2024-12-07 17:36:58.266297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:25.058 [2024-12-07 17:36:58.266306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.058 [2024-12-07 17:36:58.266314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.058 [2024-12-07 17:36:58.266468] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 369.589 ms, result 0 00:20:26.002 00:20:26.002 00:20:26.002 17:36:59 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76961 00:20:26.002 17:36:59 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76961 00:20:26.002 17:36:59 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:26.002 17:36:59 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76961 ']' 00:20:26.002 17:36:59 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.002 17:36:59 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.002 17:36:59 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.002 17:36:59 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.002 17:36:59 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:26.002 [2024-12-07 17:36:59.145443] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
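The lines above show ftl/trim.sh relaunching the target for the trim test: spdk_tgt is started with -L ftl_init (enabling FTL init-time debug output), its PID is captured as svcpid, and waitforlisten blocks until the process answers on the RPC socket. A minimal sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock socket and using the real rpc_get_methods RPC as a liveness probe (the harness's waitforlisten does roughly this, plus a liveness check on the PID):

    # Start the target with FTL init logging and remember its PID.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!
    # Poll until the target is up and serving RPCs on its UNIX socket.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done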
00:20:26.002 [2024-12-07 17:36:59.145622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76961 ] 00:20:26.002 [2024-12-07 17:36:59.313071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.264 [2024-12-07 17:36:59.435106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.837 17:37:00 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.837 17:37:00 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:26.837 17:37:00 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:27.099 [2024-12-07 17:37:00.344838] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:27.099 [2024-12-07 17:37:00.344920] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:27.363 [2024-12-07 17:37:00.524086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.363 [2024-12-07 17:37:00.524149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:27.363 [2024-12-07 17:37:00.524166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:27.363 [2024-12-07 17:37:00.524175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.363 [2024-12-07 17:37:00.527181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.363 [2024-12-07 17:37:00.527235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:27.363 [2024-12-07 17:37:00.527248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.984 ms 00:20:27.363 [2024-12-07 17:37:00.527257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.363 [2024-12-07 17:37:00.527375] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:27.363 [2024-12-07 17:37:00.528223] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:27.363 [2024-12-07 17:37:00.528267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.363 [2024-12-07 17:37:00.528276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:27.363 [2024-12-07 17:37:00.528287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.903 ms 00:20:27.363 [2024-12-07 17:37:00.528296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.363 [2024-12-07 17:37:00.530205] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:27.363 [2024-12-07 17:37:00.544611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.363 [2024-12-07 17:37:00.544668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:27.363 [2024-12-07 17:37:00.544683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.411 ms 00:20:27.363 [2024-12-07 17:37:00.544693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.363 [2024-12-07 17:37:00.544807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.363 [2024-12-07 17:37:00.544822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:27.363 [2024-12-07 17:37:00.544831] 
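Here trim.sh@96 replays a previously saved JSON configuration through rpc.py load_config; the two "Currently unable to find bdev with name: nvc0n1" notices fire while the FTL device is being re-created before its cache bdev has registered, and appear to be benign here, since the subsequent 'FTL startup' sequence completes with result 0. A sketch of the save/replay round trip (config.json is a hypothetical file name; save_config and load_config are real rpc.py methods):

    # Dump the target's current configuration as JSON, then replay it
    # into a freshly started target.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > config.json
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config < config.json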
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:27.363 [2024-12-07 17:37:00.544841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.363 [2024-12-07 17:37:00.552743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.363 [2024-12-07 17:37:00.552792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:27.363 [2024-12-07 17:37:00.552802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.846 ms 00:20:27.363 [2024-12-07 17:37:00.552812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.363 [2024-12-07 17:37:00.552932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.363 [2024-12-07 17:37:00.552946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:27.363 [2024-12-07 17:37:00.552957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:20:27.363 [2024-12-07 17:37:00.552971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.363 [2024-12-07 17:37:00.553031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.363 [2024-12-07 17:37:00.553044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:27.363 [2024-12-07 17:37:00.553053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:27.363 [2024-12-07 17:37:00.553063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.363 [2024-12-07 17:37:00.553087] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:27.363 [2024-12-07 17:37:00.557074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.363 [2024-12-07 17:37:00.557112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:27.363 [2024-12-07 17:37:00.557125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.990 ms 00:20:27.363 [2024-12-07 17:37:00.557133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.363 [2024-12-07 17:37:00.557211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.363 [2024-12-07 17:37:00.557221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:27.363 [2024-12-07 17:37:00.557233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:27.363 [2024-12-07 17:37:00.557243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.363 [2024-12-07 17:37:00.557267] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:27.363 [2024-12-07 17:37:00.557291] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:27.363 [2024-12-07 17:37:00.557341] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:27.363 [2024-12-07 17:37:00.557358] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:27.363 [2024-12-07 17:37:00.557467] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:27.363 [2024-12-07 17:37:00.557479] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:27.363 [2024-12-07 17:37:00.557495] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:27.363 [2024-12-07 17:37:00.557506] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:27.363 [2024-12-07 17:37:00.557518] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:27.363 [2024-12-07 17:37:00.557528] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:27.363 [2024-12-07 17:37:00.557538] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:27.363 [2024-12-07 17:37:00.557563] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:27.363 [2024-12-07 17:37:00.557575] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:27.363 [2024-12-07 17:37:00.557584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.363 [2024-12-07 17:37:00.557594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:27.363 [2024-12-07 17:37:00.557602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:20:27.363 [2024-12-07 17:37:00.557611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.363 [2024-12-07 17:37:00.557704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.363 [2024-12-07 17:37:00.557718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:27.363 [2024-12-07 17:37:00.557726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:27.363 [2024-12-07 17:37:00.557735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.363 [2024-12-07 17:37:00.557837] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:27.363 [2024-12-07 17:37:00.557858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:27.363 [2024-12-07 17:37:00.557866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:27.363 [2024-12-07 17:37:00.557877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.363 [2024-12-07 17:37:00.557885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:27.363 [2024-12-07 17:37:00.557896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:27.363 [2024-12-07 17:37:00.557902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:27.363 [2024-12-07 17:37:00.557913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:27.363 [2024-12-07 17:37:00.557922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:27.363 [2024-12-07 17:37:00.557931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:27.363 [2024-12-07 17:37:00.557938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:27.363 [2024-12-07 17:37:00.557947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:27.363 [2024-12-07 17:37:00.557954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:27.363 [2024-12-07 17:37:00.557962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:27.363 [2024-12-07 17:37:00.557969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:27.363 [2024-12-07 17:37:00.557997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.363 
[2024-12-07 17:37:00.558005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:27.364 [2024-12-07 17:37:00.558015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:27.364 [2024-12-07 17:37:00.558028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.364 [2024-12-07 17:37:00.558038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:27.364 [2024-12-07 17:37:00.558046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:27.364 [2024-12-07 17:37:00.558055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:27.364 [2024-12-07 17:37:00.558062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:27.364 [2024-12-07 17:37:00.558073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:27.364 [2024-12-07 17:37:00.558080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:27.364 [2024-12-07 17:37:00.558089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:27.364 [2024-12-07 17:37:00.558096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:27.364 [2024-12-07 17:37:00.558105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:27.364 [2024-12-07 17:37:00.558113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:27.364 [2024-12-07 17:37:00.558123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:27.364 [2024-12-07 17:37:00.558130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:27.364 [2024-12-07 17:37:00.558139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:27.364 [2024-12-07 17:37:00.558146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:27.364 [2024-12-07 17:37:00.558155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:27.364 [2024-12-07 17:37:00.558162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:27.364 [2024-12-07 17:37:00.558171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:27.364 [2024-12-07 17:37:00.558177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:27.364 [2024-12-07 17:37:00.558186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:27.364 [2024-12-07 17:37:00.558193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:27.364 [2024-12-07 17:37:00.558203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.364 [2024-12-07 17:37:00.558213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:27.364 [2024-12-07 17:37:00.558222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:27.364 [2024-12-07 17:37:00.558229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.364 [2024-12-07 17:37:00.558238] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:27.364 [2024-12-07 17:37:00.558248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:27.364 [2024-12-07 17:37:00.558257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:27.364 [2024-12-07 17:37:00.558265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.364 [2024-12-07 17:37:00.558275] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:27.364 [2024-12-07 17:37:00.558281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:27.364 [2024-12-07 17:37:00.558290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:27.364 [2024-12-07 17:37:00.558297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:27.364 [2024-12-07 17:37:00.558305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:27.364 [2024-12-07 17:37:00.558311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:27.364 [2024-12-07 17:37:00.558321] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:27.364 [2024-12-07 17:37:00.558332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:27.364 [2024-12-07 17:37:00.558346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:27.364 [2024-12-07 17:37:00.558353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:27.364 [2024-12-07 17:37:00.558364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:27.364 [2024-12-07 17:37:00.558372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:27.364 [2024-12-07 17:37:00.558383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:27.364 [2024-12-07 17:37:00.558391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:27.364 [2024-12-07 17:37:00.558401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:27.364 [2024-12-07 17:37:00.558409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:27.364 [2024-12-07 17:37:00.558418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:27.364 [2024-12-07 17:37:00.558426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:27.364 [2024-12-07 17:37:00.558435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:27.364 [2024-12-07 17:37:00.558443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:27.364 [2024-12-07 17:37:00.558452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:27.364 [2024-12-07 17:37:00.558460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:27.364 [2024-12-07 17:37:00.558469] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:27.364 [2024-12-07 
17:37:00.558478] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:27.364 [2024-12-07 17:37:00.558490] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:27.364 [2024-12-07 17:37:00.558499] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:27.364 [2024-12-07 17:37:00.558508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:27.364 [2024-12-07 17:37:00.558517] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:27.364 [2024-12-07 17:37:00.558527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.364 [2024-12-07 17:37:00.558535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:27.364 [2024-12-07 17:37:00.558546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:20:27.364 [2024-12-07 17:37:00.558556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.364 [2024-12-07 17:37:00.590228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.364 [2024-12-07 17:37:00.590278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:27.364 [2024-12-07 17:37:00.590292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.610 ms 00:20:27.364 [2024-12-07 17:37:00.590302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.364 [2024-12-07 17:37:00.590436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.364 [2024-12-07 17:37:00.590448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:27.364 [2024-12-07 17:37:00.590459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:27.364 [2024-12-07 17:37:00.590467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.364 [2024-12-07 17:37:00.625316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.364 [2024-12-07 17:37:00.625367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:27.364 [2024-12-07 17:37:00.625381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.824 ms 00:20:27.364 [2024-12-07 17:37:00.625389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.364 [2024-12-07 17:37:00.625476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.364 [2024-12-07 17:37:00.625486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:27.364 [2024-12-07 17:37:00.625497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:27.364 [2024-12-07 17:37:00.625506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.364 [2024-12-07 17:37:00.626112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.364 [2024-12-07 17:37:00.626143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:27.364 [2024-12-07 17:37:00.626156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:20:27.364 [2024-12-07 17:37:00.626164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:27.364 [2024-12-07 17:37:00.626314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.364 [2024-12-07 17:37:00.626324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:27.364 [2024-12-07 17:37:00.626335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:20:27.364 [2024-12-07 17:37:00.626343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.364 [2024-12-07 17:37:00.643918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.364 [2024-12-07 17:37:00.643968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:27.364 [2024-12-07 17:37:00.643995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.547 ms 00:20:27.364 [2024-12-07 17:37:00.644004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.364 [2024-12-07 17:37:00.667966] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:27.364 [2024-12-07 17:37:00.668033] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:27.364 [2024-12-07 17:37:00.668054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.364 [2024-12-07 17:37:00.668065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:27.364 [2024-12-07 17:37:00.668078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.936 ms 00:20:27.364 [2024-12-07 17:37:00.668095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.364 [2024-12-07 17:37:00.694362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.364 [2024-12-07 17:37:00.694413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:27.364 [2024-12-07 17:37:00.694428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.157 ms 00:20:27.364 [2024-12-07 17:37:00.694436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.365 [2024-12-07 17:37:00.707135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.365 [2024-12-07 17:37:00.707193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:27.365 [2024-12-07 17:37:00.707211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.600 ms 00:20:27.365 [2024-12-07 17:37:00.707219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.365 [2024-12-07 17:37:00.719919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.365 [2024-12-07 17:37:00.719968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:27.365 [2024-12-07 17:37:00.719993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.610 ms 00:20:27.365 [2024-12-07 17:37:00.720002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.365 [2024-12-07 17:37:00.720642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.365 [2024-12-07 17:37:00.720668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:27.365 [2024-12-07 17:37:00.720681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:20:27.365 [2024-12-07 17:37:00.720689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.627 [2024-12-07 
17:37:00.786576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.627 [2024-12-07 17:37:00.786638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:27.627 [2024-12-07 17:37:00.786656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.856 ms 00:20:27.627 [2024-12-07 17:37:00.786665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.627 [2024-12-07 17:37:00.797949] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:27.627 [2024-12-07 17:37:00.816702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.627 [2024-12-07 17:37:00.816759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:27.627 [2024-12-07 17:37:00.816775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.939 ms 00:20:27.627 [2024-12-07 17:37:00.816786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.627 [2024-12-07 17:37:00.816876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.627 [2024-12-07 17:37:00.816890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:27.627 [2024-12-07 17:37:00.816899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:27.627 [2024-12-07 17:37:00.816910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.627 [2024-12-07 17:37:00.816968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.627 [2024-12-07 17:37:00.817009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:27.627 [2024-12-07 17:37:00.817019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:27.627 [2024-12-07 17:37:00.817032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.627 [2024-12-07 17:37:00.817057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.627 [2024-12-07 17:37:00.817068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:27.627 [2024-12-07 17:37:00.817077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:27.627 [2024-12-07 17:37:00.817089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.627 [2024-12-07 17:37:00.817125] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:27.627 [2024-12-07 17:37:00.817139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.627 [2024-12-07 17:37:00.817150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:27.627 [2024-12-07 17:37:00.817160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:27.627 [2024-12-07 17:37:00.817168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.627 [2024-12-07 17:37:00.843620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.627 [2024-12-07 17:37:00.843674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:27.627 [2024-12-07 17:37:00.843691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.422 ms 00:20:27.627 [2024-12-07 17:37:00.843700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.627 [2024-12-07 17:37:00.843847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.627 [2024-12-07 17:37:00.843862] 
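The restore steps above tie back to the layout numbers printed earlier: 23592960 L2P entries at an address size of 4 bytes give 23592960 x 4 = 94371840 bytes = 90.00 MiB, exactly the size of the l2p region in the layout dump, while the l2p cache note ("maximum resident size is: 59 (of 60) MiB") indicates the mapping table is cached in a fixed-size window rather than held fully resident.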
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:27.627 [2024-12-07 17:37:00.843875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:27.627 [2024-12-07 17:37:00.843888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.627 [2024-12-07 17:37:00.845017] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:27.627 [2024-12-07 17:37:00.848289] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 320.588 ms, result 0 00:20:27.627 [2024-12-07 17:37:00.850288] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:27.627 Some configs were skipped because the RPC state that can call them passed over. 00:20:27.627 17:37:00 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:27.889 [2024-12-07 17:37:01.098876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.889 [2024-12-07 17:37:01.098943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:27.889 [2024-12-07 17:37:01.098957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.271 ms 00:20:27.889 [2024-12-07 17:37:01.098968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.889 [2024-12-07 17:37:01.099017] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.416 ms, result 0 00:20:27.889 true 00:20:27.889 17:37:01 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:28.151 [2024-12-07 17:37:01.314945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.151 [2024-12-07 17:37:01.315016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:28.151 [2024-12-07 17:37:01.315031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.080 ms 00:20:28.151 [2024-12-07 17:37:01.315040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.151 [2024-12-07 17:37:01.315080] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.221 ms, result 0 00:20:28.151 true 00:20:28.151 17:37:01 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76961 00:20:28.151 17:37:01 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76961 ']' 00:20:28.151 17:37:01 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76961 00:20:28.151 17:37:01 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:28.151 17:37:01 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:28.151 17:37:01 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76961 00:20:28.151 17:37:01 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:28.151 killing process with pid 76961 00:20:28.151 17:37:01 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:28.151 17:37:01 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76961' 00:20:28.151 17:37:01 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76961 00:20:28.151 17:37:01 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76961 00:20:28.723 [2024-12-07 17:37:02.077262] 
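Above, trim.sh@99 and trim.sh@100 exercise both ends of the logical space: the first bdev_ftl_unmap trims 1024 blocks at LBA 0, the second trims 1024 blocks at LBA 23591936 = 23592960 - 1024, i.e. the last 1024 LBAs of the L2P range reported at startup; each runs as an 'FTL trim' management process finishing in ~3 ms with result 0. The two calls, with flags exactly as in the log:

    # Trim the first and the last 1024 logical blocks of ftl0.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

killprocess then probes the PID with kill -0, confirms the process name (reactor_0), and kills and waits on it, which triggers the clean 'FTL shutdown' trace that follows.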
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.723 [2024-12-07 17:37:02.077308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:28.723 [2024-12-07 17:37:02.077319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:28.724 [2024-12-07 17:37:02.077326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.724 [2024-12-07 17:37:02.077345] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:28.724 [2024-12-07 17:37:02.079450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.724 [2024-12-07 17:37:02.079476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:28.724 [2024-12-07 17:37:02.079488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.091 ms 00:20:28.724 [2024-12-07 17:37:02.079494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.724 [2024-12-07 17:37:02.079731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.724 [2024-12-07 17:37:02.079740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:28.724 [2024-12-07 17:37:02.079748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:20:28.724 [2024-12-07 17:37:02.079755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.724 [2024-12-07 17:37:02.083200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.724 [2024-12-07 17:37:02.083226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:28.724 [2024-12-07 17:37:02.083237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.429 ms 00:20:28.724 [2024-12-07 17:37:02.083243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.724 [2024-12-07 17:37:02.088486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.724 [2024-12-07 17:37:02.088511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:28.724 [2024-12-07 17:37:02.088521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.214 ms 00:20:28.724 [2024-12-07 17:37:02.088527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.724 [2024-12-07 17:37:02.096822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.724 [2024-12-07 17:37:02.096852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:28.724 [2024-12-07 17:37:02.096862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.241 ms 00:20:28.724 [2024-12-07 17:37:02.096867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.724 [2024-12-07 17:37:02.102986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.724 [2024-12-07 17:37:02.103013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:28.724 [2024-12-07 17:37:02.103022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.083 ms 00:20:28.724 [2024-12-07 17:37:02.103029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.724 [2024-12-07 17:37:02.103133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.724 [2024-12-07 17:37:02.103141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:28.724 [2024-12-07 17:37:02.103149] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:28.724 [2024-12-07 17:37:02.103155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.985 [2024-12-07 17:37:02.111666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.985 [2024-12-07 17:37:02.111691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:28.985 [2024-12-07 17:37:02.111699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.494 ms 00:20:28.985 [2024-12-07 17:37:02.111705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.985 [2024-12-07 17:37:02.119741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.985 [2024-12-07 17:37:02.119766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:28.985 [2024-12-07 17:37:02.119778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.007 ms 00:20:28.985 [2024-12-07 17:37:02.119783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.985 [2024-12-07 17:37:02.127276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.985 [2024-12-07 17:37:02.127302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:28.985 [2024-12-07 17:37:02.127310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.454 ms 00:20:28.985 [2024-12-07 17:37:02.127315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.985 [2024-12-07 17:37:02.134712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.985 [2024-12-07 17:37:02.134736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:28.985 [2024-12-07 17:37:02.134744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.349 ms 00:20:28.985 [2024-12-07 17:37:02.134750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.985 [2024-12-07 17:37:02.134776] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:28.985 [2024-12-07 17:37:02.134787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:28.985 [2024-12-07 17:37:02.134797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:28.985 [2024-12-07 17:37:02.134803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:28.985 [2024-12-07 17:37:02.134810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:28.985 [2024-12-07 17:37:02.134816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:28.985 [2024-12-07 17:37:02.134824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:28.985 [2024-12-07 17:37:02.134830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:28.985 [2024-12-07 17:37:02.134837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:28.985 [2024-12-07 17:37:02.134842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:28.985 [2024-12-07 17:37:02.134850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:28.985 [2024-12-07 17:37:02.134856] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:28.985 [2024-12-07 17:37:02.134863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:28.985 [2024-12-07 17:37:02.134868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.134875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.134881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.134888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.134894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.134901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.134907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.134914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.134920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.134928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.134934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.134940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.134948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.134955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.134960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.134967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.134972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.134988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.134994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 
[2024-12-07 17:37:02.135028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:28.986 [2024-12-07 17:37:02.135191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:28.986 [2024-12-07 17:37:02.135319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:28.987 [2024-12-07 17:37:02.135326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:28.987 [2024-12-07 17:37:02.135331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:28.987 [2024-12-07 17:37:02.135338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:28.987 [2024-12-07 17:37:02.135343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:28.987 [2024-12-07 17:37:02.135352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:28.987 [2024-12-07 17:37:02.135357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:28.987 [2024-12-07 17:37:02.135364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:28.987 [2024-12-07 17:37:02.135369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:28.987 [2024-12-07 17:37:02.135376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:28.987 [2024-12-07 17:37:02.135381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:28.987 [2024-12-07 17:37:02.135388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:28.987 [2024-12-07 17:37:02.135394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:28.987 [2024-12-07 17:37:02.135400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:28.987 [2024-12-07 17:37:02.135406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:28.987 [2024-12-07 17:37:02.135414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:28.987 [2024-12-07 17:37:02.135420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:28.987 [2024-12-07 17:37:02.135427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:28.987 [2024-12-07 17:37:02.135433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:28.987 [2024-12-07 17:37:02.135440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:28.987 [2024-12-07 17:37:02.135456] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:28.987 [2024-12-07 17:37:02.135467] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84040001-4357-46bc-af25-3c5f953812bb 00:20:28.987 [2024-12-07 17:37:02.135475] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:28.987 [2024-12-07 17:37:02.135482] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:28.987 [2024-12-07 17:37:02.135488] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:28.987 [2024-12-07 17:37:02.135495] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:28.987 [2024-12-07 17:37:02.135501] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:28.987 [2024-12-07 17:37:02.135508] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:28.987 [2024-12-07 17:37:02.135514] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:28.987 [2024-12-07 17:37:02.135520] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:28.987 [2024-12-07 17:37:02.135525] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:28.987 [2024-12-07 17:37:02.135532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:28.987 [2024-12-07 17:37:02.135538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:28.987 [2024-12-07 17:37:02.135545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:20:28.987 [2024-12-07 17:37:02.135551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.987 [2024-12-07 17:37:02.145278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.987 [2024-12-07 17:37:02.145303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:28.987 [2024-12-07 17:37:02.145314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.709 ms 00:20:28.987 [2024-12-07 17:37:02.145319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.987 [2024-12-07 17:37:02.145614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.987 [2024-12-07 17:37:02.145627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:28.987 [2024-12-07 17:37:02.145636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:20:28.987 [2024-12-07 17:37:02.145642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.987 [2024-12-07 17:37:02.180340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.987 [2024-12-07 17:37:02.180368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:28.987 [2024-12-07 17:37:02.180378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.987 [2024-12-07 17:37:02.180384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.987 [2024-12-07 17:37:02.180455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.987 [2024-12-07 17:37:02.180462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:28.987 [2024-12-07 17:37:02.180472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.987 [2024-12-07 17:37:02.180477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.987 [2024-12-07 17:37:02.180511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.987 [2024-12-07 17:37:02.180518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:28.987 [2024-12-07 17:37:02.180526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.987 [2024-12-07 17:37:02.180533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.987 [2024-12-07 17:37:02.180547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.987 [2024-12-07 17:37:02.180554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:28.987 [2024-12-07 17:37:02.180560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.987 [2024-12-07 17:37:02.180567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.987 [2024-12-07 17:37:02.240590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.987 [2024-12-07 17:37:02.240620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:28.987 [2024-12-07 17:37:02.240630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.987 [2024-12-07 17:37:02.240636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.987 [2024-12-07 
17:37:02.289438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.987 [2024-12-07 17:37:02.289473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:28.987 [2024-12-07 17:37:02.289482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.987 [2024-12-07 17:37:02.289490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.987 [2024-12-07 17:37:02.289554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.987 [2024-12-07 17:37:02.289562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:28.987 [2024-12-07 17:37:02.289572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.987 [2024-12-07 17:37:02.289578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.987 [2024-12-07 17:37:02.289601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.987 [2024-12-07 17:37:02.289608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:28.987 [2024-12-07 17:37:02.289615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.987 [2024-12-07 17:37:02.289620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.987 [2024-12-07 17:37:02.289694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.987 [2024-12-07 17:37:02.289702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:28.987 [2024-12-07 17:37:02.289710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.987 [2024-12-07 17:37:02.289715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.988 [2024-12-07 17:37:02.289741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.988 [2024-12-07 17:37:02.289747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:28.988 [2024-12-07 17:37:02.289754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.988 [2024-12-07 17:37:02.289760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.988 [2024-12-07 17:37:02.289790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.988 [2024-12-07 17:37:02.289796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:28.988 [2024-12-07 17:37:02.289805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.988 [2024-12-07 17:37:02.289811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.988 [2024-12-07 17:37:02.289845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.988 [2024-12-07 17:37:02.289852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:28.988 [2024-12-07 17:37:02.289860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.988 [2024-12-07 17:37:02.289866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.988 [2024-12-07 17:37:02.289965] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 212.686 ms, result 0 00:20:29.556 17:37:02 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:29.557 [2024-12-07 17:37:02.877320] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:20:29.557 [2024-12-07 17:37:02.877459] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77014 ] 00:20:29.815 [2024-12-07 17:37:03.036049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.815 [2024-12-07 17:37:03.115742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.074 [2024-12-07 17:37:03.325067] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:30.074 [2024-12-07 17:37:03.325118] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:30.333 [2024-12-07 17:37:03.478827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.333 [2024-12-07 17:37:03.478865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:30.333 [2024-12-07 17:37:03.478875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:30.333 [2024-12-07 17:37:03.478881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.333 [2024-12-07 17:37:03.480942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.333 [2024-12-07 17:37:03.480973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:30.333 [2024-12-07 17:37:03.480994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.049 ms 00:20:30.333 [2024-12-07 17:37:03.481001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.333 [2024-12-07 17:37:03.481056] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:30.333 [2024-12-07 17:37:03.481564] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:30.333 [2024-12-07 17:37:03.481582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.333 [2024-12-07 17:37:03.481588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:30.333 [2024-12-07 17:37:03.481595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:20:30.333 [2024-12-07 17:37:03.481600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.333 [2024-12-07 17:37:03.482555] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:30.333 [2024-12-07 17:37:03.492339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.333 [2024-12-07 17:37:03.492367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:30.333 [2024-12-07 17:37:03.492375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.785 ms 00:20:30.333 [2024-12-07 17:37:03.492381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.333 [2024-12-07 17:37:03.492447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.333 [2024-12-07 17:37:03.492456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:30.333 [2024-12-07 17:37:03.492463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:30.333 [2024-12-07 
17:37:03.492468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.333 [2024-12-07 17:37:03.496704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.333 [2024-12-07 17:37:03.496727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:30.333 [2024-12-07 17:37:03.496735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.207 ms 00:20:30.333 [2024-12-07 17:37:03.496742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.333 [2024-12-07 17:37:03.496818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.333 [2024-12-07 17:37:03.496829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:30.333 [2024-12-07 17:37:03.496835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:30.333 [2024-12-07 17:37:03.496844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.333 [2024-12-07 17:37:03.496867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.333 [2024-12-07 17:37:03.496874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:30.333 [2024-12-07 17:37:03.496880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:30.333 [2024-12-07 17:37:03.496885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.333 [2024-12-07 17:37:03.496900] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:30.333 [2024-12-07 17:37:03.499583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.333 [2024-12-07 17:37:03.499606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:30.333 [2024-12-07 17:37:03.499613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.687 ms 00:20:30.333 [2024-12-07 17:37:03.499618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.333 [2024-12-07 17:37:03.499647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.333 [2024-12-07 17:37:03.499654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:30.333 [2024-12-07 17:37:03.499661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:30.333 [2024-12-07 17:37:03.499666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.333 [2024-12-07 17:37:03.499681] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:30.333 [2024-12-07 17:37:03.499696] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:30.333 [2024-12-07 17:37:03.499722] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:30.333 [2024-12-07 17:37:03.499733] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:30.333 [2024-12-07 17:37:03.499812] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:30.333 [2024-12-07 17:37:03.499820] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:30.333 [2024-12-07 17:37:03.499828] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
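The records that follow dump the computed FTL layout. Two of the reported sizes can be checked by hand: the l2p region is simply the L2P entry count times the L2P address size, and each p2l checkpoint region is the checkpoint page count times the FTL block size. A minimal sketch of that arithmetic in C, assuming a 4 KiB FTL block size (consistent with the MiB figures reported below, though this log never states the block size directly):

    #include <stdio.h>

    /* Cross-check two region sizes from the FTL layout dump below.
     * The input values are copied from this log; the 4 KiB block
     * size is an assumption consistent with the reported figures. */
    int main(void)
    {
        const double l2p_entries = 23592960; /* "L2P entries" */
        const double l2p_addr_sz = 4;        /* "L2P address size" */
        const double p2l_pages   = 2048;     /* "P2L checkpoint pages" */
        const double blk         = 4096;     /* assumed FTL block size */

        /* expect 90.00 MiB, matching "Region l2p ... blocks: 90.00 MiB" */
        printf("l2p: %.2f MiB\n", l2p_entries * l2p_addr_sz / 1048576.0);
        /* expect 8.00 MiB, matching "Region p2l0 ... blocks: 8.00 MiB" */
        printf("p2l: %.2f MiB\n", p2l_pages * blk / 1048576.0);
        return 0;
    }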
00:20:30.333 [2024-12-07 17:37:03.499838] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:30.334 [2024-12-07 17:37:03.499845] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:30.334 [2024-12-07 17:37:03.499851] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:30.334 [2024-12-07 17:37:03.499856] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:30.334 [2024-12-07 17:37:03.499862] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:30.334 [2024-12-07 17:37:03.499867] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:30.334 [2024-12-07 17:37:03.499873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.334 [2024-12-07 17:37:03.499878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:30.334 [2024-12-07 17:37:03.499884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:20:30.334 [2024-12-07 17:37:03.499889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.334 [2024-12-07 17:37:03.499955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.334 [2024-12-07 17:37:03.499968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:30.334 [2024-12-07 17:37:03.499974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:30.334 [2024-12-07 17:37:03.499988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.334 [2024-12-07 17:37:03.500065] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:30.334 [2024-12-07 17:37:03.500074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:30.334 [2024-12-07 17:37:03.500081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:30.334 [2024-12-07 17:37:03.500087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:30.334 [2024-12-07 17:37:03.500092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:30.334 [2024-12-07 17:37:03.500098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:30.334 [2024-12-07 17:37:03.500103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:30.334 [2024-12-07 17:37:03.500108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:30.334 [2024-12-07 17:37:03.500114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:30.334 [2024-12-07 17:37:03.500119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:30.334 [2024-12-07 17:37:03.500124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:30.334 [2024-12-07 17:37:03.500134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:30.334 [2024-12-07 17:37:03.500139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:30.334 [2024-12-07 17:37:03.500144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:30.334 [2024-12-07 17:37:03.500150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:30.334 [2024-12-07 17:37:03.500156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:30.334 [2024-12-07 17:37:03.500161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:20:30.334 [2024-12-07 17:37:03.500166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:30.334 [2024-12-07 17:37:03.500171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:30.334 [2024-12-07 17:37:03.500176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:30.334 [2024-12-07 17:37:03.500181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:30.334 [2024-12-07 17:37:03.500186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:30.334 [2024-12-07 17:37:03.500191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:30.334 [2024-12-07 17:37:03.500196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:30.334 [2024-12-07 17:37:03.500201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:30.334 [2024-12-07 17:37:03.500206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:30.334 [2024-12-07 17:37:03.500211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:30.334 [2024-12-07 17:37:03.500216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:30.334 [2024-12-07 17:37:03.500222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:30.334 [2024-12-07 17:37:03.500228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:30.334 [2024-12-07 17:37:03.500232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:30.334 [2024-12-07 17:37:03.500237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:30.334 [2024-12-07 17:37:03.500242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:30.334 [2024-12-07 17:37:03.500247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:30.334 [2024-12-07 17:37:03.500252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:30.334 [2024-12-07 17:37:03.500258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:30.334 [2024-12-07 17:37:03.500263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:30.334 [2024-12-07 17:37:03.500269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:30.334 [2024-12-07 17:37:03.500274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:30.334 [2024-12-07 17:37:03.500279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:30.334 [2024-12-07 17:37:03.500284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:30.334 [2024-12-07 17:37:03.500289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:30.334 [2024-12-07 17:37:03.500294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:30.334 [2024-12-07 17:37:03.500300] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:30.334 [2024-12-07 17:37:03.500306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:30.334 [2024-12-07 17:37:03.500314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:30.334 [2024-12-07 17:37:03.500320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:30.334 [2024-12-07 17:37:03.500327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:30.334 [2024-12-07 17:37:03.500333] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:30.334 [2024-12-07 17:37:03.500338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:30.334 [2024-12-07 17:37:03.500344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:30.334 [2024-12-07 17:37:03.500349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:30.334 [2024-12-07 17:37:03.500354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:30.334 [2024-12-07 17:37:03.500360] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:30.334 [2024-12-07 17:37:03.500367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:30.334 [2024-12-07 17:37:03.500373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:30.334 [2024-12-07 17:37:03.500378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:30.334 [2024-12-07 17:37:03.500383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:30.334 [2024-12-07 17:37:03.500388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:30.334 [2024-12-07 17:37:03.500393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:30.334 [2024-12-07 17:37:03.500399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:30.334 [2024-12-07 17:37:03.500403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:30.334 [2024-12-07 17:37:03.500408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:30.334 [2024-12-07 17:37:03.500414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:30.334 [2024-12-07 17:37:03.500419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:30.334 [2024-12-07 17:37:03.500424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:30.334 [2024-12-07 17:37:03.500429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:30.334 [2024-12-07 17:37:03.500435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:30.334 [2024-12-07 17:37:03.500440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:30.334 [2024-12-07 17:37:03.500445] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:30.334 [2024-12-07 17:37:03.500453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:30.334 [2024-12-07 17:37:03.500459] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:30.334 [2024-12-07 17:37:03.500464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:30.334 [2024-12-07 17:37:03.500469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:30.334 [2024-12-07 17:37:03.500475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:30.334 [2024-12-07 17:37:03.500482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.334 [2024-12-07 17:37:03.500489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:30.334 [2024-12-07 17:37:03.500495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.469 ms 00:20:30.334 [2024-12-07 17:37:03.500501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.334 [2024-12-07 17:37:03.521032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.334 [2024-12-07 17:37:03.521059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:30.334 [2024-12-07 17:37:03.521067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.490 ms 00:20:30.334 [2024-12-07 17:37:03.521073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.334 [2024-12-07 17:37:03.521164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.334 [2024-12-07 17:37:03.521172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:30.335 [2024-12-07 17:37:03.521179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:30.335 [2024-12-07 17:37:03.521184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.335 [2024-12-07 17:37:03.557215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.335 [2024-12-07 17:37:03.557247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:30.335 [2024-12-07 17:37:03.557258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.014 ms 00:20:30.335 [2024-12-07 17:37:03.557265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.335 [2024-12-07 17:37:03.557324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.335 [2024-12-07 17:37:03.557333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:30.335 [2024-12-07 17:37:03.557340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:30.335 [2024-12-07 17:37:03.557345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.335 [2024-12-07 17:37:03.557643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.335 [2024-12-07 17:37:03.557655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:30.335 [2024-12-07 17:37:03.557662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:20:30.335 [2024-12-07 17:37:03.557672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.335 [2024-12-07 17:37:03.557777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:30.335 [2024-12-07 17:37:03.557791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:30.335 [2024-12-07 17:37:03.557797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:20:30.335 [2024-12-07 17:37:03.557803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.335 [2024-12-07 17:37:03.568491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.335 [2024-12-07 17:37:03.568516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:30.335 [2024-12-07 17:37:03.568524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.672 ms 00:20:30.335 [2024-12-07 17:37:03.568529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.335 [2024-12-07 17:37:03.578210] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:30.335 [2024-12-07 17:37:03.578238] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:30.335 [2024-12-07 17:37:03.578247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.335 [2024-12-07 17:37:03.578254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:30.335 [2024-12-07 17:37:03.578261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.645 ms 00:20:30.335 [2024-12-07 17:37:03.578266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.335 [2024-12-07 17:37:03.596662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.335 [2024-12-07 17:37:03.596688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:30.335 [2024-12-07 17:37:03.596697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.349 ms 00:20:30.335 [2024-12-07 17:37:03.596704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.335 [2024-12-07 17:37:03.605875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.335 [2024-12-07 17:37:03.605901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:30.335 [2024-12-07 17:37:03.605908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.119 ms 00:20:30.335 [2024-12-07 17:37:03.605914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.335 [2024-12-07 17:37:03.614746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.335 [2024-12-07 17:37:03.614781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:30.335 [2024-12-07 17:37:03.614789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.792 ms 00:20:30.335 [2024-12-07 17:37:03.614795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.335 [2024-12-07 17:37:03.615251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.335 [2024-12-07 17:37:03.615271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:30.335 [2024-12-07 17:37:03.615278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:20:30.335 [2024-12-07 17:37:03.615283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.335 [2024-12-07 17:37:03.659826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.335 [2024-12-07 17:37:03.659862] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:30.335 [2024-12-07 17:37:03.659872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.525 ms 00:20:30.335 [2024-12-07 17:37:03.659879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.335 [2024-12-07 17:37:03.667621] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:30.335 [2024-12-07 17:37:03.678821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.335 [2024-12-07 17:37:03.678849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:30.335 [2024-12-07 17:37:03.678859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.885 ms 00:20:30.335 [2024-12-07 17:37:03.678868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.335 [2024-12-07 17:37:03.678934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.335 [2024-12-07 17:37:03.678942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:30.335 [2024-12-07 17:37:03.678949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:30.335 [2024-12-07 17:37:03.678955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.335 [2024-12-07 17:37:03.678998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.335 [2024-12-07 17:37:03.679006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:30.335 [2024-12-07 17:37:03.679012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:30.335 [2024-12-07 17:37:03.679021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.335 [2024-12-07 17:37:03.679045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.335 [2024-12-07 17:37:03.679052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:30.335 [2024-12-07 17:37:03.679058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:30.335 [2024-12-07 17:37:03.679064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.335 [2024-12-07 17:37:03.679087] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:30.335 [2024-12-07 17:37:03.679095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.335 [2024-12-07 17:37:03.679101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:30.335 [2024-12-07 17:37:03.679107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:30.335 [2024-12-07 17:37:03.679112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.335 [2024-12-07 17:37:03.697578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.335 [2024-12-07 17:37:03.697606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:30.335 [2024-12-07 17:37:03.697615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.451 ms 00:20:30.335 [2024-12-07 17:37:03.697621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.335 [2024-12-07 17:37:03.697686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.335 [2024-12-07 17:37:03.697694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:30.335 [2024-12-07 17:37:03.697701] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:30.335 [2024-12-07 17:37:03.697707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.335 [2024-12-07 17:37:03.698314] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:30.335 [2024-12-07 17:37:03.700504] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 219.269 ms, result 0 00:20:30.335 [2024-12-07 17:37:03.701373] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:30.593 [2024-12-07 17:37:03.716044] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:31.534  [2024-12-07T17:37:05.856Z] Copying: 16/256 [MB] (16 MBps) [2024-12-07T17:37:06.796Z] Copying: 26/256 [MB] (10 MBps) [2024-12-07T17:37:08.180Z] Copying: 43/256 [MB] (16 MBps) [2024-12-07T17:37:09.121Z] Copying: 53/256 [MB] (10 MBps) [2024-12-07T17:37:10.063Z] Copying: 69/256 [MB] (15 MBps) [2024-12-07T17:37:11.009Z] Copying: 86/256 [MB] (17 MBps) [2024-12-07T17:37:11.954Z] Copying: 105/256 [MB] (18 MBps) [2024-12-07T17:37:12.900Z] Copying: 116/256 [MB] (11 MBps) [2024-12-07T17:37:13.845Z] Copying: 141/256 [MB] (24 MBps) [2024-12-07T17:37:14.791Z] Copying: 160/256 [MB] (19 MBps) [2024-12-07T17:37:16.178Z] Copying: 173/256 [MB] (12 MBps) [2024-12-07T17:37:16.829Z] Copying: 186/256 [MB] (13 MBps) [2024-12-07T17:37:17.799Z] Copying: 198/256 [MB] (11 MBps) [2024-12-07T17:37:19.183Z] Copying: 210/256 [MB] (11 MBps) [2024-12-07T17:37:20.118Z] Copying: 220/256 [MB] (10 MBps) [2024-12-07T17:37:21.056Z] Copying: 236/256 [MB] (15 MBps) [2024-12-07T17:37:21.315Z] Copying: 251/256 [MB] (15 MBps) [2024-12-07T17:37:21.578Z] Copying: 256/256 [MB] (average 14 MBps)[2024-12-07 17:37:21.314529] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:48.196 [2024-12-07 17:37:21.325350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.196 [2024-12-07 17:37:21.325403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:48.196 [2024-12-07 17:37:21.325427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:48.196 [2024-12-07 17:37:21.325437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.196 [2024-12-07 17:37:21.325465] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:48.196 [2024-12-07 17:37:21.328435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.196 [2024-12-07 17:37:21.328483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:48.196 [2024-12-07 17:37:21.328495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.954 ms 00:20:48.196 [2024-12-07 17:37:21.328505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.196 [2024-12-07 17:37:21.328796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.196 [2024-12-07 17:37:21.328809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:48.196 [2024-12-07 17:37:21.328819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:20:48.196 [2024-12-07 17:37:21.328827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.196 [2024-12-07 
17:37:21.332552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.196 [2024-12-07 17:37:21.332581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:48.196 [2024-12-07 17:37:21.332592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.701 ms 00:20:48.196 [2024-12-07 17:37:21.332600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.196 [2024-12-07 17:37:21.342156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.196 [2024-12-07 17:37:21.342203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:48.196 [2024-12-07 17:37:21.342215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.534 ms 00:20:48.196 [2024-12-07 17:37:21.342224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.196 [2024-12-07 17:37:21.369765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.196 [2024-12-07 17:37:21.369817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:48.196 [2024-12-07 17:37:21.369830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.457 ms 00:20:48.196 [2024-12-07 17:37:21.369838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.196 [2024-12-07 17:37:21.387692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.196 [2024-12-07 17:37:21.387741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:48.196 [2024-12-07 17:37:21.387762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.785 ms 00:20:48.196 [2024-12-07 17:37:21.387771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.196 [2024-12-07 17:37:21.387938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.196 [2024-12-07 17:37:21.387953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:48.196 [2024-12-07 17:37:21.387972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:20:48.196 [2024-12-07 17:37:21.388004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.196 [2024-12-07 17:37:21.415960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.196 [2024-12-07 17:37:21.416024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:48.196 [2024-12-07 17:37:21.416037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.936 ms 00:20:48.196 [2024-12-07 17:37:21.416045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.196 [2024-12-07 17:37:21.441967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.196 [2024-12-07 17:37:21.442028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:48.196 [2024-12-07 17:37:21.442040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.835 ms 00:20:48.196 [2024-12-07 17:37:21.442048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.196 [2024-12-07 17:37:21.474450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.196 [2024-12-07 17:37:21.474510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:48.196 [2024-12-07 17:37:21.474526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.338 ms 00:20:48.196 [2024-12-07 17:37:21.474535] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.196 [2024-12-07 17:37:21.500337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.196 [2024-12-07 17:37:21.500386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:48.196 [2024-12-07 17:37:21.500399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.682 ms 00:20:48.196 [2024-12-07 17:37:21.500408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.196 [2024-12-07 17:37:21.500476] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:48.196 [2024-12-07 17:37:21.500494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:48.196 [2024-12-07 17:37:21.500506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:48.196 [2024-12-07 17:37:21.500515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:48.196 [2024-12-07 17:37:21.500524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:48.196 [2024-12-07 17:37:21.500535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:48.196 [2024-12-07 17:37:21.500544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:48.196 [2024-12-07 17:37:21.500552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:48.196 [2024-12-07 17:37:21.500560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:48.196 [2024-12-07 17:37:21.500569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:48.196 [2024-12-07 17:37:21.500579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:48.196 [2024-12-07 17:37:21.500588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:48.196 [2024-12-07 17:37:21.500596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:48.196 [2024-12-07 17:37:21.500603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:48.196 [2024-12-07 17:37:21.500611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:48.196 [2024-12-07 17:37:21.500619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:48.196 [2024-12-07 17:37:21.500627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:48.196 [2024-12-07 17:37:21.500635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:48.196 [2024-12-07 17:37:21.500642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:48.196 [2024-12-07 17:37:21.500650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:48.196 [2024-12-07 17:37:21.500658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:48.196 [2024-12-07 17:37:21.500666] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 21 ... Band 95: 0 / 261120 wr_cnt: 0 state: free (75 consecutive ftl_dev_dump_bands records, identical in form to the entries above and below, condensed)
00:20:48.197 [2024-12-07 17:37:21.501312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:48.197 [2024-12-07 17:37:21.501321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:48.197 [2024-12-07 17:37:21.501329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:48.197 [2024-12-07 17:37:21.501338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:48.197 [2024-12-07 17:37:21.501347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:48.197 [2024-12-07 17:37:21.501364] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:48.197 [2024-12-07 17:37:21.501373] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84040001-4357-46bc-af25-3c5f953812bb 00:20:48.197 [2024-12-07 17:37:21.501382] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:48.197 [2024-12-07 17:37:21.501392] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:48.197 [2024-12-07 17:37:21.501400] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:48.197 [2024-12-07 17:37:21.501410] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:48.197 [2024-12-07 17:37:21.501418] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:48.197 [2024-12-07 17:37:21.501427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:48.197 [2024-12-07 17:37:21.501439] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:48.197 [2024-12-07 17:37:21.501445] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:48.197 [2024-12-07 17:37:21.501452] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:48.197 [2024-12-07 17:37:21.501460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.197 [2024-12-07 17:37:21.501469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:48.197 [2024-12-07 17:37:21.501479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:20:48.197 [2024-12-07 17:37:21.501487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.197 [2024-12-07 17:37:21.515343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.197 [2024-12-07 17:37:21.515389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:48.197 [2024-12-07 17:37:21.515401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.834 ms 00:20:48.197 [2024-12-07 17:37:21.515410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.197 [2024-12-07 17:37:21.515828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.197 [2024-12-07 17:37:21.515855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:48.197 [2024-12-07 17:37:21.515866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:20:48.197 [2024-12-07 17:37:21.515874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.197 [2024-12-07 17:37:21.556053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.197 [2024-12-07 17:37:21.556105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:20:48.197 [2024-12-07 17:37:21.556118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.197 [2024-12-07 17:37:21.556134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.197 [2024-12-07 17:37:21.556253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.197 [2024-12-07 17:37:21.556266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:48.197 [2024-12-07 17:37:21.556276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.197 [2024-12-07 17:37:21.556284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.197 [2024-12-07 17:37:21.556339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.198 [2024-12-07 17:37:21.556349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:48.198 [2024-12-07 17:37:21.556358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.198 [2024-12-07 17:37:21.556368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.198 [2024-12-07 17:37:21.556390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.198 [2024-12-07 17:37:21.556399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:48.198 [2024-12-07 17:37:21.556410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.198 [2024-12-07 17:37:21.556418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.463 [2024-12-07 17:37:21.642378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.463 [2024-12-07 17:37:21.642438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:48.463 [2024-12-07 17:37:21.642452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.463 [2024-12-07 17:37:21.642461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.463 [2024-12-07 17:37:21.713015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.463 [2024-12-07 17:37:21.713069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:48.463 [2024-12-07 17:37:21.713082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.463 [2024-12-07 17:37:21.713093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.463 [2024-12-07 17:37:21.713188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.463 [2024-12-07 17:37:21.713199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:48.463 [2024-12-07 17:37:21.713210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.463 [2024-12-07 17:37:21.713219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.463 [2024-12-07 17:37:21.713252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.463 [2024-12-07 17:37:21.713266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:48.463 [2024-12-07 17:37:21.713275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.463 [2024-12-07 17:37:21.713284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.463 [2024-12-07 17:37:21.713392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.463 [2024-12-07 17:37:21.713414] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:48.463 [2024-12-07 17:37:21.713424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.463 [2024-12-07 17:37:21.713433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.463 [2024-12-07 17:37:21.713469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.463 [2024-12-07 17:37:21.713485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:48.463 [2024-12-07 17:37:21.713497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.463 [2024-12-07 17:37:21.713507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.463 [2024-12-07 17:37:21.713568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.463 [2024-12-07 17:37:21.713585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:48.463 [2024-12-07 17:37:21.713594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.463 [2024-12-07 17:37:21.713602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.463 [2024-12-07 17:37:21.713654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.463 [2024-12-07 17:37:21.713673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:48.463 [2024-12-07 17:37:21.713683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.463 [2024-12-07 17:37:21.713691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.463 [2024-12-07 17:37:21.713849] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 388.495 ms, result 0 00:20:49.410 00:20:49.410 00:20:49.410 17:37:22 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:49.982 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:49.982 17:37:23 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:49.982 17:37:23 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:49.982 17:37:23 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:49.982 17:37:23 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:49.982 17:37:23 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:49.982 17:37:23 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:49.982 17:37:23 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76961 00:20:49.982 17:37:23 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76961 ']' 00:20:49.982 Process with pid 76961 is not found 00:20:49.982 17:37:23 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76961 00:20:49.982 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76961) - No such process 00:20:49.982 17:37:23 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 76961 is not found' 00:20:49.982 00:20:49.982 real 1m22.896s 00:20:49.982 user 1m38.930s 00:20:49.982 sys 0m15.756s 00:20:49.982 17:37:23 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:49.982 ************************************ 00:20:49.982 END TEST ftl_trim 00:20:49.982 ************************************ 00:20:49.982 17:37:23 ftl.ftl_trim -- 
common/autotest_common.sh@10 -- # set +x 00:20:49.982 17:37:23 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:49.982 17:37:23 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:49.982 17:37:23 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:49.982 17:37:23 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:49.982 ************************************ 00:20:49.982 START TEST ftl_restore 00:20:49.982 ************************************ 00:20:49.982 17:37:23 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:49.982 * Looking for test storage... 00:20:49.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:49.982 17:37:23 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:49.982 17:37:23 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:20:49.982 17:37:23 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:50.244 17:37:23 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:50.244 17:37:23 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:50.244 17:37:23 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:50.244 17:37:23 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:50.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.244 --rc genhtml_branch_coverage=1 00:20:50.244 --rc genhtml_function_coverage=1 00:20:50.244 --rc genhtml_legend=1 00:20:50.244 --rc geninfo_all_blocks=1 00:20:50.244 --rc geninfo_unexecuted_blocks=1 00:20:50.244 00:20:50.244 ' 00:20:50.244 17:37:23 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:50.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.244 --rc genhtml_branch_coverage=1 00:20:50.244 --rc genhtml_function_coverage=1 00:20:50.244 --rc genhtml_legend=1 00:20:50.244 --rc geninfo_all_blocks=1 00:20:50.244 --rc geninfo_unexecuted_blocks=1 00:20:50.244 00:20:50.244 ' 00:20:50.244 17:37:23 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:50.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.244 --rc genhtml_branch_coverage=1 00:20:50.244 --rc genhtml_function_coverage=1 00:20:50.244 --rc genhtml_legend=1 00:20:50.244 --rc geninfo_all_blocks=1 00:20:50.244 --rc geninfo_unexecuted_blocks=1 00:20:50.244 00:20:50.244 ' 00:20:50.244 17:37:23 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:50.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.244 --rc genhtml_branch_coverage=1 00:20:50.244 --rc genhtml_function_coverage=1 00:20:50.244 --rc genhtml_legend=1 00:20:50.244 --rc geninfo_all_blocks=1 00:20:50.244 --rc geninfo_unexecuted_blocks=1 00:20:50.244 00:20:50.244 ' 00:20:50.244 17:37:23 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:50.244 17:37:23 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:50.244 17:37:23 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:50.244 17:37:23 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:50.244 17:37:23 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
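The xtrace just above steps through the cmp_versions helper from scripts/common.sh: both dotted version strings are split into numeric fields on IFS=.-: and compared left to right until one side differs, which is how the harness decides that lcov 1.15 sorts below 2 and selects the legacy --rc option names. A minimal standalone sketch of that comparison, assuming purely numeric fields and treating missing fields as zero; the helper name version_lt is ours, not part of the SPDK tree:

  # version_lt A B -> exit 0 iff dotted version A sorts before version B
  version_lt() {
      local IFS=.-: i
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first lower field decides
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1    # equal versions are not strictly less
  }

  version_lt 1.15 2 && echo 'use legacy lcov_branch_coverage=1 style flags'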
00:20:50.244 17:37:23 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:50.244 17:37:23 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:50.244 17:37:23 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:50.244 17:37:23 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:50.244 17:37:23 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:50.244 17:37:23 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:50.244 17:37:23 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:50.244 17:37:23 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:50.244 17:37:23 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:50.244 17:37:23 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:50.244 17:37:23 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:50.244 17:37:23 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:50.244 17:37:23 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:50.244 17:37:23 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:50.244 17:37:23 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:50.244 17:37:23 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:50.244 17:37:23 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:50.244 17:37:23 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:50.245 17:37:23 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:50.245 17:37:23 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:50.245 17:37:23 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:50.245 17:37:23 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:50.245 17:37:23 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:50.245 17:37:23 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:50.245 17:37:23 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:50.245 17:37:23 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:50.245 17:37:23 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.GzP8qDEOmf 00:20:50.245 17:37:23 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:50.245 17:37:23 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:50.245 17:37:23 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:50.245 17:37:23 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:50.245 17:37:23 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:50.245 17:37:23 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:50.245 17:37:23 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:50.245 17:37:23 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:50.245 
17:37:23 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77291 00:20:50.245 17:37:23 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77291 00:20:50.245 17:37:23 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77291 ']' 00:20:50.245 17:37:23 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.245 17:37:23 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.245 17:37:23 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.245 17:37:23 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.245 17:37:23 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:50.245 17:37:23 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:50.245 [2024-12-07 17:37:23.495235] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:20:50.245 [2024-12-07 17:37:23.495378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77291 ] 00:20:50.506 [2024-12-07 17:37:23.657599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.506 [2024-12-07 17:37:23.783689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.451 17:37:24 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.451 17:37:24 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:51.451 17:37:24 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:51.451 17:37:24 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:51.451 17:37:24 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:51.451 17:37:24 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:51.451 17:37:24 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:51.451 17:37:24 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:51.451 17:37:24 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:51.451 17:37:24 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:51.451 17:37:24 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:51.451 17:37:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:51.451 17:37:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:51.451 17:37:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:51.451 17:37:24 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:51.451 17:37:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:51.713 17:37:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:51.713 { 00:20:51.713 "name": "nvme0n1", 00:20:51.713 "aliases": [ 00:20:51.713 "47c344c6-8789-416a-9b1d-62587e6e27aa" 00:20:51.713 ], 00:20:51.713 "product_name": "NVMe disk", 00:20:51.713 "block_size": 4096, 00:20:51.713 "num_blocks": 1310720, 00:20:51.713 "uuid": 
"47c344c6-8789-416a-9b1d-62587e6e27aa", 00:20:51.713 "numa_id": -1, 00:20:51.713 "assigned_rate_limits": { 00:20:51.713 "rw_ios_per_sec": 0, 00:20:51.713 "rw_mbytes_per_sec": 0, 00:20:51.713 "r_mbytes_per_sec": 0, 00:20:51.713 "w_mbytes_per_sec": 0 00:20:51.713 }, 00:20:51.713 "claimed": true, 00:20:51.713 "claim_type": "read_many_write_one", 00:20:51.713 "zoned": false, 00:20:51.713 "supported_io_types": { 00:20:51.713 "read": true, 00:20:51.713 "write": true, 00:20:51.713 "unmap": true, 00:20:51.713 "flush": true, 00:20:51.713 "reset": true, 00:20:51.713 "nvme_admin": true, 00:20:51.713 "nvme_io": true, 00:20:51.713 "nvme_io_md": false, 00:20:51.713 "write_zeroes": true, 00:20:51.713 "zcopy": false, 00:20:51.713 "get_zone_info": false, 00:20:51.713 "zone_management": false, 00:20:51.713 "zone_append": false, 00:20:51.713 "compare": true, 00:20:51.713 "compare_and_write": false, 00:20:51.713 "abort": true, 00:20:51.713 "seek_hole": false, 00:20:51.713 "seek_data": false, 00:20:51.713 "copy": true, 00:20:51.713 "nvme_iov_md": false 00:20:51.713 }, 00:20:51.713 "driver_specific": { 00:20:51.713 "nvme": [ 00:20:51.713 { 00:20:51.713 "pci_address": "0000:00:11.0", 00:20:51.713 "trid": { 00:20:51.713 "trtype": "PCIe", 00:20:51.713 "traddr": "0000:00:11.0" 00:20:51.713 }, 00:20:51.713 "ctrlr_data": { 00:20:51.713 "cntlid": 0, 00:20:51.713 "vendor_id": "0x1b36", 00:20:51.713 "model_number": "QEMU NVMe Ctrl", 00:20:51.713 "serial_number": "12341", 00:20:51.713 "firmware_revision": "8.0.0", 00:20:51.713 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:51.713 "oacs": { 00:20:51.713 "security": 0, 00:20:51.713 "format": 1, 00:20:51.713 "firmware": 0, 00:20:51.713 "ns_manage": 1 00:20:51.713 }, 00:20:51.713 "multi_ctrlr": false, 00:20:51.713 "ana_reporting": false 00:20:51.713 }, 00:20:51.713 "vs": { 00:20:51.713 "nvme_version": "1.4" 00:20:51.713 }, 00:20:51.713 "ns_data": { 00:20:51.713 "id": 1, 00:20:51.713 "can_share": false 00:20:51.713 } 00:20:51.713 } 00:20:51.713 ], 00:20:51.713 "mp_policy": "active_passive" 00:20:51.713 } 00:20:51.713 } 00:20:51.713 ]' 00:20:51.713 17:37:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:51.713 17:37:25 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:51.713 17:37:25 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:51.713 17:37:25 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:51.714 17:37:25 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:51.714 17:37:25 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:51.714 17:37:25 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:51.714 17:37:25 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:51.714 17:37:25 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:51.714 17:37:25 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:51.714 17:37:25 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:51.974 17:37:25 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=23449283-1523-4717-8c78-fe22e335e1a7 00:20:51.974 17:37:25 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:51.974 17:37:25 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 23449283-1523-4717-8c78-fe22e335e1a7 00:20:52.234 17:37:25 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:20:52.495 17:37:25 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=b8f611b1-4343-43ba-a4d3-2231064c2d36 00:20:52.495 17:37:25 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b8f611b1-4343-43ba-a4d3-2231064c2d36 00:20:52.756 17:37:25 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=5cec64d5-13b9-4d75-bee0-cb8e0acc8102 00:20:52.756 17:37:25 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:52.756 17:37:25 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5cec64d5-13b9-4d75-bee0-cb8e0acc8102 00:20:52.756 17:37:25 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:52.756 17:37:25 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:52.756 17:37:25 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=5cec64d5-13b9-4d75-bee0-cb8e0acc8102 00:20:52.756 17:37:25 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:52.756 17:37:25 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 5cec64d5-13b9-4d75-bee0-cb8e0acc8102 00:20:52.756 17:37:25 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=5cec64d5-13b9-4d75-bee0-cb8e0acc8102 00:20:52.756 17:37:25 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:52.756 17:37:25 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:52.756 17:37:25 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:52.756 17:37:25 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5cec64d5-13b9-4d75-bee0-cb8e0acc8102 00:20:53.016 17:37:26 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:53.016 { 00:20:53.016 "name": "5cec64d5-13b9-4d75-bee0-cb8e0acc8102", 00:20:53.016 "aliases": [ 00:20:53.016 "lvs/nvme0n1p0" 00:20:53.016 ], 00:20:53.016 "product_name": "Logical Volume", 00:20:53.016 "block_size": 4096, 00:20:53.016 "num_blocks": 26476544, 00:20:53.016 "uuid": "5cec64d5-13b9-4d75-bee0-cb8e0acc8102", 00:20:53.016 "assigned_rate_limits": { 00:20:53.016 "rw_ios_per_sec": 0, 00:20:53.016 "rw_mbytes_per_sec": 0, 00:20:53.016 "r_mbytes_per_sec": 0, 00:20:53.016 "w_mbytes_per_sec": 0 00:20:53.016 }, 00:20:53.016 "claimed": false, 00:20:53.016 "zoned": false, 00:20:53.016 "supported_io_types": { 00:20:53.016 "read": true, 00:20:53.016 "write": true, 00:20:53.016 "unmap": true, 00:20:53.016 "flush": false, 00:20:53.016 "reset": true, 00:20:53.016 "nvme_admin": false, 00:20:53.016 "nvme_io": false, 00:20:53.016 "nvme_io_md": false, 00:20:53.016 "write_zeroes": true, 00:20:53.016 "zcopy": false, 00:20:53.016 "get_zone_info": false, 00:20:53.016 "zone_management": false, 00:20:53.016 "zone_append": false, 00:20:53.016 "compare": false, 00:20:53.016 "compare_and_write": false, 00:20:53.016 "abort": false, 00:20:53.016 "seek_hole": true, 00:20:53.016 "seek_data": true, 00:20:53.016 "copy": false, 00:20:53.016 "nvme_iov_md": false 00:20:53.016 }, 00:20:53.016 "driver_specific": { 00:20:53.016 "lvol": { 00:20:53.016 "lvol_store_uuid": "b8f611b1-4343-43ba-a4d3-2231064c2d36", 00:20:53.016 "base_bdev": "nvme0n1", 00:20:53.016 "thin_provision": true, 00:20:53.016 "num_allocated_clusters": 0, 00:20:53.016 "snapshot": false, 00:20:53.016 "clone": false, 00:20:53.016 "esnap_clone": false 00:20:53.016 } 00:20:53.016 } 00:20:53.016 } 00:20:53.016 ]' 00:20:53.016 17:37:26 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:53.016 17:37:26 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:53.016 17:37:26 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:53.016 17:37:26 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:53.016 17:37:26 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:53.016 17:37:26 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:53.016 17:37:26 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:53.016 17:37:26 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:53.016 17:37:26 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:53.274 17:37:26 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:53.275 17:37:26 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:53.275 17:37:26 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 5cec64d5-13b9-4d75-bee0-cb8e0acc8102 00:20:53.275 17:37:26 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=5cec64d5-13b9-4d75-bee0-cb8e0acc8102 00:20:53.275 17:37:26 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:53.275 17:37:26 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:53.275 17:37:26 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:53.275 17:37:26 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5cec64d5-13b9-4d75-bee0-cb8e0acc8102 00:20:53.533 17:37:26 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:53.533 { 00:20:53.533 "name": "5cec64d5-13b9-4d75-bee0-cb8e0acc8102", 00:20:53.533 "aliases": [ 00:20:53.533 "lvs/nvme0n1p0" 00:20:53.533 ], 00:20:53.533 "product_name": "Logical Volume", 00:20:53.533 "block_size": 4096, 00:20:53.533 "num_blocks": 26476544, 00:20:53.533 "uuid": "5cec64d5-13b9-4d75-bee0-cb8e0acc8102", 00:20:53.533 "assigned_rate_limits": { 00:20:53.533 "rw_ios_per_sec": 0, 00:20:53.533 "rw_mbytes_per_sec": 0, 00:20:53.533 "r_mbytes_per_sec": 0, 00:20:53.533 "w_mbytes_per_sec": 0 00:20:53.533 }, 00:20:53.533 "claimed": false, 00:20:53.533 "zoned": false, 00:20:53.533 "supported_io_types": { 00:20:53.533 "read": true, 00:20:53.533 "write": true, 00:20:53.533 "unmap": true, 00:20:53.533 "flush": false, 00:20:53.533 "reset": true, 00:20:53.533 "nvme_admin": false, 00:20:53.533 "nvme_io": false, 00:20:53.533 "nvme_io_md": false, 00:20:53.533 "write_zeroes": true, 00:20:53.533 "zcopy": false, 00:20:53.533 "get_zone_info": false, 00:20:53.533 "zone_management": false, 00:20:53.533 "zone_append": false, 00:20:53.533 "compare": false, 00:20:53.533 "compare_and_write": false, 00:20:53.533 "abort": false, 00:20:53.533 "seek_hole": true, 00:20:53.533 "seek_data": true, 00:20:53.533 "copy": false, 00:20:53.533 "nvme_iov_md": false 00:20:53.533 }, 00:20:53.533 "driver_specific": { 00:20:53.533 "lvol": { 00:20:53.533 "lvol_store_uuid": "b8f611b1-4343-43ba-a4d3-2231064c2d36", 00:20:53.533 "base_bdev": "nvme0n1", 00:20:53.533 "thin_provision": true, 00:20:53.533 "num_allocated_clusters": 0, 00:20:53.533 "snapshot": false, 00:20:53.533 "clone": false, 00:20:53.533 "esnap_clone": false 00:20:53.533 } 00:20:53.533 } 00:20:53.533 } 00:20:53.533 ]' 00:20:53.533 17:37:26 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
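The bdev JSON dumps surrounding this point come from get_bdev_size in autotest_common.sh, which the restore test calls repeatedly: fetch the bdev descriptor over JSON-RPC, pull block_size and num_blocks out with jq, and report the size in MiB. Condensed into a standalone sketch (the rpc.py path matches this run; the default bdev name is the thin lvol from this log and is purely illustrative):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdev=${1:-5cec64d5-13b9-4d75-bee0-cb8e0acc8102}

  info=$("$rpc" bdev_get_bdevs -b "$bdev")
  bs=$(jq '.[] .block_size' <<< "$info")     # 4096 in this run
  nb=$(jq '.[] .num_blocks' <<< "$info")     # 26476544 for the thin lvol
  echo "$bdev: $(( bs * nb / 1024 / 1024 )) MiB"   # 4096 * 26476544 B = 103424 MiB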
00:20:53.533 17:37:26 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:53.533 17:37:26 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:53.533 17:37:26 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:53.533 17:37:26 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:53.533 17:37:26 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:53.533 17:37:26 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:53.533 17:37:26 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:53.791 17:37:27 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:53.791 17:37:27 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 5cec64d5-13b9-4d75-bee0-cb8e0acc8102 00:20:53.791 17:37:27 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=5cec64d5-13b9-4d75-bee0-cb8e0acc8102 00:20:53.791 17:37:27 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:53.791 17:37:27 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:53.791 17:37:27 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:53.792 17:37:27 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5cec64d5-13b9-4d75-bee0-cb8e0acc8102 00:20:54.050 17:37:27 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:54.050 { 00:20:54.050 "name": "5cec64d5-13b9-4d75-bee0-cb8e0acc8102", 00:20:54.050 "aliases": [ 00:20:54.050 "lvs/nvme0n1p0" 00:20:54.050 ], 00:20:54.050 "product_name": "Logical Volume", 00:20:54.050 "block_size": 4096, 00:20:54.050 "num_blocks": 26476544, 00:20:54.050 "uuid": "5cec64d5-13b9-4d75-bee0-cb8e0acc8102", 00:20:54.050 "assigned_rate_limits": { 00:20:54.050 "rw_ios_per_sec": 0, 00:20:54.050 "rw_mbytes_per_sec": 0, 00:20:54.050 "r_mbytes_per_sec": 0, 00:20:54.050 "w_mbytes_per_sec": 0 00:20:54.050 }, 00:20:54.050 "claimed": false, 00:20:54.050 "zoned": false, 00:20:54.050 "supported_io_types": { 00:20:54.050 "read": true, 00:20:54.050 "write": true, 00:20:54.050 "unmap": true, 00:20:54.050 "flush": false, 00:20:54.050 "reset": true, 00:20:54.050 "nvme_admin": false, 00:20:54.050 "nvme_io": false, 00:20:54.050 "nvme_io_md": false, 00:20:54.050 "write_zeroes": true, 00:20:54.050 "zcopy": false, 00:20:54.050 "get_zone_info": false, 00:20:54.050 "zone_management": false, 00:20:54.050 "zone_append": false, 00:20:54.050 "compare": false, 00:20:54.050 "compare_and_write": false, 00:20:54.050 "abort": false, 00:20:54.050 "seek_hole": true, 00:20:54.050 "seek_data": true, 00:20:54.050 "copy": false, 00:20:54.050 "nvme_iov_md": false 00:20:54.050 }, 00:20:54.050 "driver_specific": { 00:20:54.050 "lvol": { 00:20:54.050 "lvol_store_uuid": "b8f611b1-4343-43ba-a4d3-2231064c2d36", 00:20:54.050 "base_bdev": "nvme0n1", 00:20:54.050 "thin_provision": true, 00:20:54.050 "num_allocated_clusters": 0, 00:20:54.050 "snapshot": false, 00:20:54.050 "clone": false, 00:20:54.050 "esnap_clone": false 00:20:54.050 } 00:20:54.050 } 00:20:54.050 } 00:20:54.050 ]' 00:20:54.050 17:37:27 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:54.050 17:37:27 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:54.050 17:37:27 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:54.050 17:37:27 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:20:54.050 17:37:27 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:54.050 17:37:27 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:54.050 17:37:27 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:54.050 17:37:27 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 5cec64d5-13b9-4d75-bee0-cb8e0acc8102 --l2p_dram_limit 10' 00:20:54.050 17:37:27 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:54.050 17:37:27 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:54.050 17:37:27 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:54.050 17:37:27 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:54.050 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:54.050 17:37:27 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5cec64d5-13b9-4d75-bee0-cb8e0acc8102 --l2p_dram_limit 10 -c nvc0n1p0 00:20:54.309 [2024-12-07 17:37:27.461150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.309 [2024-12-07 17:37:27.461186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:54.309 [2024-12-07 17:37:27.461199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:54.309 [2024-12-07 17:37:27.461205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.309 [2024-12-07 17:37:27.461250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.309 [2024-12-07 17:37:27.461258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:54.309 [2024-12-07 17:37:27.461266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:54.309 [2024-12-07 17:37:27.461271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.309 [2024-12-07 17:37:27.461291] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:54.309 [2024-12-07 17:37:27.461857] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:54.309 [2024-12-07 17:37:27.461879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.309 [2024-12-07 17:37:27.461886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:54.309 [2024-12-07 17:37:27.461894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:20:54.309 [2024-12-07 17:37:27.461899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.309 [2024-12-07 17:37:27.461924] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 246c4a5e-6db5-4225-b39d-2b44d9866ffc 00:20:54.309 [2024-12-07 17:37:27.462856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.309 [2024-12-07 17:37:27.462881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:54.309 [2024-12-07 17:37:27.462889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:54.309 [2024-12-07 17:37:27.462897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.309 [2024-12-07 17:37:27.467610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.309 [2024-12-07 
17:37:27.467641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:54.309 [2024-12-07 17:37:27.467649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.671 ms 00:20:54.309 [2024-12-07 17:37:27.467656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.309 [2024-12-07 17:37:27.467756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.309 [2024-12-07 17:37:27.467766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:54.309 [2024-12-07 17:37:27.467773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:54.309 [2024-12-07 17:37:27.467782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.309 [2024-12-07 17:37:27.467815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.309 [2024-12-07 17:37:27.467825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:54.309 [2024-12-07 17:37:27.467834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:54.309 [2024-12-07 17:37:27.467841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.309 [2024-12-07 17:37:27.467861] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:54.309 [2024-12-07 17:37:27.470769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.309 [2024-12-07 17:37:27.470797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:54.309 [2024-12-07 17:37:27.470807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.911 ms 00:20:54.309 [2024-12-07 17:37:27.470813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.309 [2024-12-07 17:37:27.470841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.309 [2024-12-07 17:37:27.470849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:54.309 [2024-12-07 17:37:27.470856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:54.309 [2024-12-07 17:37:27.470862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.309 [2024-12-07 17:37:27.470876] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:54.310 [2024-12-07 17:37:27.471001] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:54.310 [2024-12-07 17:37:27.471015] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:54.310 [2024-12-07 17:37:27.471024] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:54.310 [2024-12-07 17:37:27.471033] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:54.310 [2024-12-07 17:37:27.471041] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:54.310 [2024-12-07 17:37:27.471049] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:54.310 [2024-12-07 17:37:27.471055] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:54.310 [2024-12-07 17:37:27.471065] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:54.310 [2024-12-07 17:37:27.471071] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:54.310 [2024-12-07 17:37:27.471078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.310 [2024-12-07 17:37:27.471088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:54.310 [2024-12-07 17:37:27.471095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:20:54.310 [2024-12-07 17:37:27.471101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.310 [2024-12-07 17:37:27.471166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.310 [2024-12-07 17:37:27.471173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:54.310 [2024-12-07 17:37:27.471181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:54.310 [2024-12-07 17:37:27.471186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.310 [2024-12-07 17:37:27.471264] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:54.310 [2024-12-07 17:37:27.471272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:54.310 [2024-12-07 17:37:27.471280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:54.310 [2024-12-07 17:37:27.471286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.310 [2024-12-07 17:37:27.471293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:54.310 [2024-12-07 17:37:27.471298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:54.310 [2024-12-07 17:37:27.471304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:54.310 [2024-12-07 17:37:27.471310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:54.310 [2024-12-07 17:37:27.471316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:54.310 [2024-12-07 17:37:27.471321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:54.310 [2024-12-07 17:37:27.471328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:54.310 [2024-12-07 17:37:27.471335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:54.310 [2024-12-07 17:37:27.471341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:54.310 [2024-12-07 17:37:27.471346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:54.310 [2024-12-07 17:37:27.471353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:54.310 [2024-12-07 17:37:27.471357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.310 [2024-12-07 17:37:27.471365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:54.310 [2024-12-07 17:37:27.471370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:54.310 [2024-12-07 17:37:27.471376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.310 [2024-12-07 17:37:27.471381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:54.310 [2024-12-07 17:37:27.471388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:54.310 [2024-12-07 17:37:27.471392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.310 [2024-12-07 17:37:27.471399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:54.310 
[2024-12-07 17:37:27.471404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:54.310 [2024-12-07 17:37:27.471410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.310 [2024-12-07 17:37:27.471415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:54.310 [2024-12-07 17:37:27.471421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:54.310 [2024-12-07 17:37:27.471425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.310 [2024-12-07 17:37:27.471432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:54.310 [2024-12-07 17:37:27.471436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:54.310 [2024-12-07 17:37:27.471442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.310 [2024-12-07 17:37:27.471448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:54.310 [2024-12-07 17:37:27.471455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:54.310 [2024-12-07 17:37:27.471460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:54.310 [2024-12-07 17:37:27.471466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:54.310 [2024-12-07 17:37:27.471470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:54.310 [2024-12-07 17:37:27.471477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:54.310 [2024-12-07 17:37:27.471483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:54.310 [2024-12-07 17:37:27.471489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:54.310 [2024-12-07 17:37:27.471494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.310 [2024-12-07 17:37:27.471500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:54.310 [2024-12-07 17:37:27.471505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:54.310 [2024-12-07 17:37:27.471511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.310 [2024-12-07 17:37:27.471517] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:54.310 [2024-12-07 17:37:27.471524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:54.310 [2024-12-07 17:37:27.471530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:54.310 [2024-12-07 17:37:27.471537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.310 [2024-12-07 17:37:27.471542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:54.310 [2024-12-07 17:37:27.471550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:54.310 [2024-12-07 17:37:27.471555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:54.310 [2024-12-07 17:37:27.471561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:54.310 [2024-12-07 17:37:27.471566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:54.310 [2024-12-07 17:37:27.471572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:54.310 [2024-12-07 17:37:27.471578] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:54.310 [2024-12-07 
17:37:27.471588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:54.310 [2024-12-07 17:37:27.471594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:54.310 [2024-12-07 17:37:27.471601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:54.310 [2024-12-07 17:37:27.471606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:54.310 [2024-12-07 17:37:27.471613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:54.310 [2024-12-07 17:37:27.471618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:54.310 [2024-12-07 17:37:27.471624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:54.310 [2024-12-07 17:37:27.471629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:54.310 [2024-12-07 17:37:27.471637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:54.310 [2024-12-07 17:37:27.471642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:54.310 [2024-12-07 17:37:27.471650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:54.310 [2024-12-07 17:37:27.471655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:54.310 [2024-12-07 17:37:27.471661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:54.310 [2024-12-07 17:37:27.471667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:54.310 [2024-12-07 17:37:27.471674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:54.310 [2024-12-07 17:37:27.471679] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:54.310 [2024-12-07 17:37:27.471686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:54.310 [2024-12-07 17:37:27.471692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:54.310 [2024-12-07 17:37:27.471699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:54.310 [2024-12-07 17:37:27.471704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:54.310 [2024-12-07 17:37:27.471711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:54.310 [2024-12-07 17:37:27.471717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.310 [2024-12-07 17:37:27.471724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:54.310 [2024-12-07 17:37:27.471737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:20:54.310 [2024-12-07 17:37:27.471744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.310 [2024-12-07 17:37:27.471783] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:54.310 [2024-12-07 17:37:27.471793] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:56.846 [2024-12-07 17:37:30.183380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.846 [2024-12-07 17:37:30.183476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:56.846 [2024-12-07 17:37:30.183495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2711.586 ms 00:20:56.846 [2024-12-07 17:37:30.183508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.846 [2024-12-07 17:37:30.215852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.846 [2024-12-07 17:37:30.215927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:56.846 [2024-12-07 17:37:30.215944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.093 ms 00:20:56.846 [2024-12-07 17:37:30.215957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.846 [2024-12-07 17:37:30.216119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.846 [2024-12-07 17:37:30.216137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:56.846 [2024-12-07 17:37:30.216147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:20:56.846 [2024-12-07 17:37:30.216164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.108 [2024-12-07 17:37:30.251831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.108 [2024-12-07 17:37:30.251890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:57.108 [2024-12-07 17:37:30.251902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.629 ms 00:20:57.108 [2024-12-07 17:37:30.251913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.108 [2024-12-07 17:37:30.251950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.108 [2024-12-07 17:37:30.251968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:57.108 [2024-12-07 17:37:30.252001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:57.108 [2024-12-07 17:37:30.252021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.108 [2024-12-07 17:37:30.252626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.108 [2024-12-07 17:37:30.252669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:57.108 [2024-12-07 17:37:30.252680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:20:57.109 [2024-12-07 17:37:30.252692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.109 
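[Editor's check] The 80.00 MiB l2p region in the layout dump above is consistent with the L2P geometry these startup logs report elsewhere (20971520 L2P entries, 4-byte address size); as a quick sanity check of the arithmetic:

$$20971520 \times 4\,\mathrm{B} = 83886080\,\mathrm{B} = 80\,\mathrm{MiB}$$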
[2024-12-07 17:37:30.252811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.109 [2024-12-07 17:37:30.252823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:57.109 [2024-12-07 17:37:30.252835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:57.109 [2024-12-07 17:37:30.252849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.109 [2024-12-07 17:37:30.270712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.109 [2024-12-07 17:37:30.270766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:57.109 [2024-12-07 17:37:30.270778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.845 ms 00:20:57.109 [2024-12-07 17:37:30.270788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.109 [2024-12-07 17:37:30.301467] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:57.109 [2024-12-07 17:37:30.305363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.109 [2024-12-07 17:37:30.305410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:57.109 [2024-12-07 17:37:30.305425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.483 ms 00:20:57.109 [2024-12-07 17:37:30.305434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.109 [2024-12-07 17:37:30.403363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.109 [2024-12-07 17:37:30.403424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:57.109 [2024-12-07 17:37:30.403442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.877 ms 00:20:57.109 [2024-12-07 17:37:30.403451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.109 [2024-12-07 17:37:30.403665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.109 [2024-12-07 17:37:30.403682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:57.109 [2024-12-07 17:37:30.403700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:20:57.109 [2024-12-07 17:37:30.403708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.109 [2024-12-07 17:37:30.430197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.109 [2024-12-07 17:37:30.430250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:57.109 [2024-12-07 17:37:30.430265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.426 ms 00:20:57.109 [2024-12-07 17:37:30.430274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.109 [2024-12-07 17:37:30.455959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.109 [2024-12-07 17:37:30.456023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:57.109 [2024-12-07 17:37:30.456040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.622 ms 00:20:57.109 [2024-12-07 17:37:30.456048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.109 [2024-12-07 17:37:30.456673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.109 [2024-12-07 17:37:30.456695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:57.109 
[2024-12-07 17:37:30.456707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:20:57.109 [2024-12-07 17:37:30.456718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.371 [2024-12-07 17:37:30.543809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.371 [2024-12-07 17:37:30.543864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:57.371 [2024-12-07 17:37:30.543883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.010 ms 00:20:57.371 [2024-12-07 17:37:30.543892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.371 [2024-12-07 17:37:30.571856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.371 [2024-12-07 17:37:30.571909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:57.371 [2024-12-07 17:37:30.571927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.845 ms 00:20:57.371 [2024-12-07 17:37:30.571935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.371 [2024-12-07 17:37:30.598172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.371 [2024-12-07 17:37:30.598222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:57.371 [2024-12-07 17:37:30.598238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.158 ms 00:20:57.371 [2024-12-07 17:37:30.598246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.371 [2024-12-07 17:37:30.624257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.371 [2024-12-07 17:37:30.624307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:57.371 [2024-12-07 17:37:30.624322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.957 ms 00:20:57.371 [2024-12-07 17:37:30.624330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.371 [2024-12-07 17:37:30.624389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.371 [2024-12-07 17:37:30.624399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:57.371 [2024-12-07 17:37:30.624414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:57.371 [2024-12-07 17:37:30.624422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.371 [2024-12-07 17:37:30.624516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.371 [2024-12-07 17:37:30.624531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:57.371 [2024-12-07 17:37:30.624542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:57.371 [2024-12-07 17:37:30.624551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.371 [2024-12-07 17:37:30.625907] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3164.204 ms, result 0 00:20:57.371 { 00:20:57.371 "name": "ftl0", 00:20:57.371 "uuid": "246c4a5e-6db5-4225-b39d-2b44d9866ffc" 00:20:57.371 } 00:20:57.371 17:37:30 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:57.371 17:37:30 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:57.633 17:37:30 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:57.633 17:37:30 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:57.895 [2024-12-07 17:37:31.081092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.895 [2024-12-07 17:37:31.081161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:57.895 [2024-12-07 17:37:31.081176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:57.895 [2024-12-07 17:37:31.081187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.895 [2024-12-07 17:37:31.081213] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:57.895 [2024-12-07 17:37:31.084254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.895 [2024-12-07 17:37:31.084299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:57.895 [2024-12-07 17:37:31.084314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.016 ms 00:20:57.895 [2024-12-07 17:37:31.084323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.895 [2024-12-07 17:37:31.084596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.895 [2024-12-07 17:37:31.084620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:57.895 [2024-12-07 17:37:31.084632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:20:57.895 [2024-12-07 17:37:31.084640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.895 [2024-12-07 17:37:31.087902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.895 [2024-12-07 17:37:31.087928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:57.895 [2024-12-07 17:37:31.087941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.244 ms 00:20:57.895 [2024-12-07 17:37:31.087951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.895 [2024-12-07 17:37:31.094167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.895 [2024-12-07 17:37:31.094311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:57.895 [2024-12-07 17:37:31.094327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.191 ms 00:20:57.895 [2024-12-07 17:37:31.094336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.895 [2024-12-07 17:37:31.120605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.895 [2024-12-07 17:37:31.120655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:57.895 [2024-12-07 17:37:31.120671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.170 ms 00:20:57.895 [2024-12-07 17:37:31.120679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.895 [2024-12-07 17:37:31.138740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.895 [2024-12-07 17:37:31.138793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:57.895 [2024-12-07 17:37:31.138809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.999 ms 00:20:57.895 [2024-12-07 17:37:31.138818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.895 [2024-12-07 17:37:31.139012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.895 [2024-12-07 17:37:31.139028] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:57.895 [2024-12-07 17:37:31.139041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:20:57.895 [2024-12-07 17:37:31.139051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.895 [2024-12-07 17:37:31.165290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.895 [2024-12-07 17:37:31.165340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:57.895 [2024-12-07 17:37:31.165356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.211 ms 00:20:57.895 [2024-12-07 17:37:31.165364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.895 [2024-12-07 17:37:31.190906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.895 [2024-12-07 17:37:31.190956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:57.895 [2024-12-07 17:37:31.190970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.485 ms 00:20:57.895 [2024-12-07 17:37:31.190978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.895 [2024-12-07 17:37:31.216477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.895 [2024-12-07 17:37:31.216524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:57.895 [2024-12-07 17:37:31.216539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.430 ms 00:20:57.895 [2024-12-07 17:37:31.216546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.895 [2024-12-07 17:37:31.241572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.895 [2024-12-07 17:37:31.241619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:57.895 [2024-12-07 17:37:31.241634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.911 ms 00:20:57.895 [2024-12-07 17:37:31.241641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.895 [2024-12-07 17:37:31.241692] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:57.895 [2024-12-07 17:37:31.241707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:57.895 [2024-12-07 17:37:31.241724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:57.895 [2024-12-07 17:37:31.241732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:57.895 [2024-12-07 17:37:31.241742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:57.895 [2024-12-07 17:37:31.241749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:57.895 [2024-12-07 17:37:31.241759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:57.895 [2024-12-07 17:37:31.241767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:57.895 [2024-12-07 17:37:31.241779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.241787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.241798] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.241806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.241817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.241827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.241837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.241844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.241854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.241861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.241870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.241877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.241889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.241898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.241907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.241914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.241926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.241934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.241944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.241952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.241961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.241969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.241996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 
[2024-12-07 17:37:31.242041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:20:57.896 [2024-12-07 17:37:31.242277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:57.896 [2024-12-07 17:37:31.242585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:57.897 [2024-12-07 17:37:31.242594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:57.897 [2024-12-07 17:37:31.242603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:57.897 [2024-12-07 17:37:31.242612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:57.897 [2024-12-07 17:37:31.242620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:57.897 [2024-12-07 17:37:31.242635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:57.897 [2024-12-07 17:37:31.242642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:57.897 [2024-12-07 17:37:31.242651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:57.897 [2024-12-07 17:37:31.242668] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:57.897 [2024-12-07 17:37:31.242679] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 246c4a5e-6db5-4225-b39d-2b44d9866ffc 00:20:57.897 [2024-12-07 17:37:31.242689] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:57.897 [2024-12-07 17:37:31.242701] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:57.897 [2024-12-07 17:37:31.242713] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:57.897 [2024-12-07 17:37:31.242724] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:57.897 [2024-12-07 17:37:31.242732] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:57.897 [2024-12-07 17:37:31.242742] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:57.897 [2024-12-07 17:37:31.242752] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:57.897 [2024-12-07 17:37:31.242760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:57.897 [2024-12-07 17:37:31.242768] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:20:57.897 [2024-12-07 17:37:31.242778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.897 [2024-12-07 17:37:31.242786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:57.897 [2024-12-07 17:37:31.242798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:20:57.897 [2024-12-07 17:37:31.242809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.897 [2024-12-07 17:37:31.256612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.897 [2024-12-07 17:37:31.256655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:57.897 [2024-12-07 17:37:31.256669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.742 ms 00:20:57.897 [2024-12-07 17:37:31.256677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.897 [2024-12-07 17:37:31.257084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.897 [2024-12-07 17:37:31.257101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:57.897 [2024-12-07 17:37:31.257117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:20:57.897 [2024-12-07 17:37:31.257125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.157 [2024-12-07 17:37:31.303933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.157 [2024-12-07 17:37:31.303994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:58.157 [2024-12-07 17:37:31.304008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.157 [2024-12-07 17:37:31.304017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.157 [2024-12-07 17:37:31.304089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.157 [2024-12-07 17:37:31.304098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:58.157 [2024-12-07 17:37:31.304113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.157 [2024-12-07 17:37:31.304121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.157 [2024-12-07 17:37:31.304223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.157 [2024-12-07 17:37:31.304235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:58.157 [2024-12-07 17:37:31.304247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.157 [2024-12-07 17:37:31.304256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.157 [2024-12-07 17:37:31.304279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.157 [2024-12-07 17:37:31.304288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:58.157 [2024-12-07 17:37:31.304298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.157 [2024-12-07 17:37:31.304309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.157 [2024-12-07 17:37:31.389862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.157 [2024-12-07 17:37:31.389922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:58.157 [2024-12-07 17:37:31.389938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:20:58.157 [2024-12-07 17:37:31.389947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.157 [2024-12-07 17:37:31.460027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.157 [2024-12-07 17:37:31.460083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:58.157 [2024-12-07 17:37:31.460098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.157 [2024-12-07 17:37:31.460110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.157 [2024-12-07 17:37:31.460206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.157 [2024-12-07 17:37:31.460218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:58.157 [2024-12-07 17:37:31.460231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.157 [2024-12-07 17:37:31.460240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.157 [2024-12-07 17:37:31.460310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.157 [2024-12-07 17:37:31.460322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:58.157 [2024-12-07 17:37:31.460333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.157 [2024-12-07 17:37:31.460341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.157 [2024-12-07 17:37:31.460455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.157 [2024-12-07 17:37:31.460467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:58.157 [2024-12-07 17:37:31.460478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.157 [2024-12-07 17:37:31.460487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.157 [2024-12-07 17:37:31.460524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.157 [2024-12-07 17:37:31.460535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:58.157 [2024-12-07 17:37:31.460547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.157 [2024-12-07 17:37:31.460556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.157 [2024-12-07 17:37:31.460604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.157 [2024-12-07 17:37:31.460613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:58.157 [2024-12-07 17:37:31.460623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.157 [2024-12-07 17:37:31.460631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.157 [2024-12-07 17:37:31.460685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.157 [2024-12-07 17:37:31.460695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:58.157 [2024-12-07 17:37:31.460706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.157 [2024-12-07 17:37:31.460714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.157 [2024-12-07 17:37:31.460863] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 379.733 ms, result 0 00:20:58.157 true 00:20:58.157 17:37:31 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77291 
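[Editor's note] The `killprocess 77291` call above expands into the xtrace that follows. Below is a rough sketch of what that helper appears to do, reconstructed only from the traced commands; it is simplified and hypothetical, not the verbatim autotest_common.sh source, and only the branch actually taken in this run is certain:

```bash
# Hypothetical reconstruction of killprocess() from the xtrace below.
# The @NNN comments refer to the autotest_common.sh line numbers in the trace.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                 # @954: trace shows '[ -z 77291 ]'
    kill -0 "$pid" || return 0                # @958: probe that the pid is alive
    if [[ $(uname) == Linux ]]; then          # @959: platform check
        process_name=$(ps --no-headers -o comm= "$pid")   # @960: here 'reactor_0'
    fi
    # @964: the trace tests '[ reactor_0 = sudo ]'; what the true branch
    # does is not visible in this run, so it is omitted here.
    echo "killing process with pid $pid"      # @972
    kill "$pid"                               # @973
    wait "$pid"                               # @978: reap the process
}
```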
00:20:58.157 17:37:31 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77291 ']' 00:20:58.157 17:37:31 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77291 00:20:58.157 17:37:31 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:20:58.157 17:37:31 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:58.157 17:37:31 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77291 00:20:58.157 17:37:31 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:58.157 killing process with pid 77291 00:20:58.157 17:37:31 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:58.157 17:37:31 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77291' 00:20:58.157 17:37:31 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77291 00:20:58.157 17:37:31 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77291 00:21:04.749 17:37:37 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:08.045 262144+0 records in 00:21:08.045 262144+0 records out 00:21:08.045 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.78993 s, 283 MB/s 00:21:08.045 17:37:41 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:10.590 17:37:43 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:10.590 [2024-12-07 17:37:43.666306] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:21:10.590 [2024-12-07 17:37:43.666400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77519 ] 00:21:10.590 [2024-12-07 17:37:43.820752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.590 [2024-12-07 17:37:43.936203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.851 [2024-12-07 17:37:44.230541] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:10.851 [2024-12-07 17:37:44.230624] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:11.113 [2024-12-07 17:37:44.392336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.113 [2024-12-07 17:37:44.392404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:11.113 [2024-12-07 17:37:44.392419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:11.113 [2024-12-07 17:37:44.392428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.113 [2024-12-07 17:37:44.392485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.113 [2024-12-07 17:37:44.392498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:11.113 [2024-12-07 17:37:44.392508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:11.113 [2024-12-07 17:37:44.392516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.113 [2024-12-07 17:37:44.392537] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:21:11.113 [2024-12-07 17:37:44.393252] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:11.113 [2024-12-07 17:37:44.393283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.113 [2024-12-07 17:37:44.393291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:11.113 [2024-12-07 17:37:44.393301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:21:11.113 [2024-12-07 17:37:44.393309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.113 [2024-12-07 17:37:44.395055] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:11.113 [2024-12-07 17:37:44.409146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.113 [2024-12-07 17:37:44.409195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:11.113 [2024-12-07 17:37:44.409208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.092 ms 00:21:11.113 [2024-12-07 17:37:44.409223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.113 [2024-12-07 17:37:44.409301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.113 [2024-12-07 17:37:44.409313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:11.113 [2024-12-07 17:37:44.409322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:11.113 [2024-12-07 17:37:44.409330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.113 [2024-12-07 17:37:44.417462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.113 [2024-12-07 17:37:44.417506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:11.113 [2024-12-07 17:37:44.417523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.051 ms 00:21:11.113 [2024-12-07 17:37:44.417541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.113 [2024-12-07 17:37:44.417624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.113 [2024-12-07 17:37:44.417634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:11.113 [2024-12-07 17:37:44.417642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:21:11.113 [2024-12-07 17:37:44.417651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.113 [2024-12-07 17:37:44.417697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.113 [2024-12-07 17:37:44.417707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:11.113 [2024-12-07 17:37:44.417716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:11.113 [2024-12-07 17:37:44.417726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.113 [2024-12-07 17:37:44.417749] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:11.113 [2024-12-07 17:37:44.421680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.113 [2024-12-07 17:37:44.421723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:11.113 [2024-12-07 17:37:44.421733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.936 ms 00:21:11.113 [2024-12-07 17:37:44.421741] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.113 [2024-12-07 17:37:44.421781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.113 [2024-12-07 17:37:44.421789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:11.113 [2024-12-07 17:37:44.421798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:11.113 [2024-12-07 17:37:44.421805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.113 [2024-12-07 17:37:44.421858] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:11.113 [2024-12-07 17:37:44.421885] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:11.113 [2024-12-07 17:37:44.421926] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:11.113 [2024-12-07 17:37:44.421943] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:11.113 [2024-12-07 17:37:44.422072] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:11.113 [2024-12-07 17:37:44.422083] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:11.113 [2024-12-07 17:37:44.422095] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:11.113 [2024-12-07 17:37:44.422106] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:11.113 [2024-12-07 17:37:44.422115] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:11.113 [2024-12-07 17:37:44.422124] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:11.113 [2024-12-07 17:37:44.422131] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:11.113 [2024-12-07 17:37:44.422142] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:11.113 [2024-12-07 17:37:44.422150] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:11.113 [2024-12-07 17:37:44.422158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.113 [2024-12-07 17:37:44.422166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:11.113 [2024-12-07 17:37:44.422175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:21:11.113 [2024-12-07 17:37:44.422183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.113 [2024-12-07 17:37:44.422266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.113 [2024-12-07 17:37:44.422275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:11.113 [2024-12-07 17:37:44.422283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:11.113 [2024-12-07 17:37:44.422290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.113 [2024-12-07 17:37:44.422398] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:11.113 [2024-12-07 17:37:44.422409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:11.113 [2024-12-07 17:37:44.422418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:11.114 [2024-12-07 17:37:44.422425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.114 [2024-12-07 17:37:44.422434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:11.114 [2024-12-07 17:37:44.422441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:11.114 [2024-12-07 17:37:44.422448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:11.114 [2024-12-07 17:37:44.422455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:11.114 [2024-12-07 17:37:44.422462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:11.114 [2024-12-07 17:37:44.422469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:11.114 [2024-12-07 17:37:44.422476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:11.114 [2024-12-07 17:37:44.422484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:11.114 [2024-12-07 17:37:44.422491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:11.114 [2024-12-07 17:37:44.422504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:11.114 [2024-12-07 17:37:44.422512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:11.114 [2024-12-07 17:37:44.422519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.114 [2024-12-07 17:37:44.422527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:11.114 [2024-12-07 17:37:44.422534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:11.114 [2024-12-07 17:37:44.422540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.114 [2024-12-07 17:37:44.422547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:11.114 [2024-12-07 17:37:44.422554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:11.114 [2024-12-07 17:37:44.422562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.114 [2024-12-07 17:37:44.422569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:11.114 [2024-12-07 17:37:44.422577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:11.114 [2024-12-07 17:37:44.422584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.114 [2024-12-07 17:37:44.422591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:11.114 [2024-12-07 17:37:44.422597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:11.114 [2024-12-07 17:37:44.422604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.114 [2024-12-07 17:37:44.422611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:11.114 [2024-12-07 17:37:44.422617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:11.114 [2024-12-07 17:37:44.422624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.114 [2024-12-07 17:37:44.422631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:11.114 [2024-12-07 17:37:44.422637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:11.114 [2024-12-07 17:37:44.422643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:11.114 [2024-12-07 17:37:44.422650] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:21:11.114 [2024-12-07 17:37:44.422657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:11.114 [2024-12-07 17:37:44.422663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:11.114 [2024-12-07 17:37:44.422670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:11.114 [2024-12-07 17:37:44.422676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:11.114 [2024-12-07 17:37:44.422682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.114 [2024-12-07 17:37:44.422688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:11.114 [2024-12-07 17:37:44.422694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:11.114 [2024-12-07 17:37:44.422701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.114 [2024-12-07 17:37:44.422709] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:11.114 [2024-12-07 17:37:44.422717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:11.114 [2024-12-07 17:37:44.422724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:11.114 [2024-12-07 17:37:44.422733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.114 [2024-12-07 17:37:44.422741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:11.114 [2024-12-07 17:37:44.422748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:11.114 [2024-12-07 17:37:44.422754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:11.114 [2024-12-07 17:37:44.422761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:11.114 [2024-12-07 17:37:44.422768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:11.114 [2024-12-07 17:37:44.422775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:11.114 [2024-12-07 17:37:44.422783] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:11.114 [2024-12-07 17:37:44.422796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:11.114 [2024-12-07 17:37:44.422804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:11.114 [2024-12-07 17:37:44.422812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:11.114 [2024-12-07 17:37:44.422819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:11.114 [2024-12-07 17:37:44.422826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:11.114 [2024-12-07 17:37:44.422833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:11.114 [2024-12-07 17:37:44.422840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:11.114 [2024-12-07 17:37:44.422848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:11.114 [2024-12-07 17:37:44.422855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:11.114 [2024-12-07 17:37:44.422862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:11.114 [2024-12-07 17:37:44.422870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:11.114 [2024-12-07 17:37:44.422876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:11.114 [2024-12-07 17:37:44.422883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:11.114 [2024-12-07 17:37:44.422890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:11.114 [2024-12-07 17:37:44.422897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:11.114 [2024-12-07 17:37:44.422903] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:11.114 [2024-12-07 17:37:44.422911] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:11.114 [2024-12-07 17:37:44.422919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:11.114 [2024-12-07 17:37:44.422927] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:11.114 [2024-12-07 17:37:44.422934] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:11.114 [2024-12-07 17:37:44.422940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:11.114 [2024-12-07 17:37:44.422951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.114 [2024-12-07 17:37:44.422959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:11.114 [2024-12-07 17:37:44.422967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:21:11.114 [2024-12-07 17:37:44.422975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.114 [2024-12-07 17:37:44.454526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.114 [2024-12-07 17:37:44.454577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:11.114 [2024-12-07 17:37:44.454592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.487 ms 00:21:11.114 [2024-12-07 17:37:44.454601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.114 [2024-12-07 17:37:44.454694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.114 [2024-12-07 17:37:44.454703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:11.114 [2024-12-07 17:37:44.454712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.067 ms 00:21:11.114 [2024-12-07 17:37:44.454723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.429 [2024-12-07 17:37:44.501405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.429 [2024-12-07 17:37:44.501460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:11.429 [2024-12-07 17:37:44.501473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.623 ms 00:21:11.429 [2024-12-07 17:37:44.501482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.429 [2024-12-07 17:37:44.501544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.429 [2024-12-07 17:37:44.501559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:11.429 [2024-12-07 17:37:44.501569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:11.429 [2024-12-07 17:37:44.501578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.429 [2024-12-07 17:37:44.502213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.429 [2024-12-07 17:37:44.502254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:11.429 [2024-12-07 17:37:44.502266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:21:11.429 [2024-12-07 17:37:44.502274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.430 [2024-12-07 17:37:44.502438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.430 [2024-12-07 17:37:44.502454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:11.430 [2024-12-07 17:37:44.502464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:21:11.430 [2024-12-07 17:37:44.502471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.430 [2024-12-07 17:37:44.518345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.430 [2024-12-07 17:37:44.518394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:11.430 [2024-12-07 17:37:44.518406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.854 ms 00:21:11.430 [2024-12-07 17:37:44.518414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.430 [2024-12-07 17:37:44.532760] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:11.430 [2024-12-07 17:37:44.532808] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:11.430 [2024-12-07 17:37:44.532822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.430 [2024-12-07 17:37:44.532831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:11.430 [2024-12-07 17:37:44.532841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.298 ms 00:21:11.430 [2024-12-07 17:37:44.532848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.430 [2024-12-07 17:37:44.558903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.430 [2024-12-07 17:37:44.558951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:11.430 [2024-12-07 17:37:44.558963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.999 ms 00:21:11.430 [2024-12-07 17:37:44.558971] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.430 [2024-12-07 17:37:44.571748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.430 [2024-12-07 17:37:44.571794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:11.430 [2024-12-07 17:37:44.571805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.708 ms 00:21:11.430 [2024-12-07 17:37:44.571813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.430 [2024-12-07 17:37:44.584356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.430 [2024-12-07 17:37:44.584402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:11.430 [2024-12-07 17:37:44.584413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.495 ms 00:21:11.430 [2024-12-07 17:37:44.584421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.430 [2024-12-07 17:37:44.585091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.430 [2024-12-07 17:37:44.585126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:11.430 [2024-12-07 17:37:44.585141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:21:11.430 [2024-12-07 17:37:44.585148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.430 [2024-12-07 17:37:44.650268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.430 [2024-12-07 17:37:44.650332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:11.430 [2024-12-07 17:37:44.650354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.098 ms 00:21:11.430 [2024-12-07 17:37:44.650364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.430 [2024-12-07 17:37:44.661380] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:11.430 [2024-12-07 17:37:44.664298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.430 [2024-12-07 17:37:44.664344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:11.430 [2024-12-07 17:37:44.664356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.877 ms 00:21:11.430 [2024-12-07 17:37:44.664364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.430 [2024-12-07 17:37:44.664452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.430 [2024-12-07 17:37:44.664463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:11.430 [2024-12-07 17:37:44.664472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:11.430 [2024-12-07 17:37:44.664484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.430 [2024-12-07 17:37:44.664554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.430 [2024-12-07 17:37:44.664567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:11.430 [2024-12-07 17:37:44.664577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:11.430 [2024-12-07 17:37:44.664585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.430 [2024-12-07 17:37:44.664605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.430 [2024-12-07 17:37:44.664615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:21:11.430 [2024-12-07 17:37:44.664625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:11.430 [2024-12-07 17:37:44.664633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.430 [2024-12-07 17:37:44.664672] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:11.430 [2024-12-07 17:37:44.664683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.430 [2024-12-07 17:37:44.664692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:11.430 [2024-12-07 17:37:44.664700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:11.430 [2024-12-07 17:37:44.664708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.430 [2024-12-07 17:37:44.690662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.430 [2024-12-07 17:37:44.690716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:11.430 [2024-12-07 17:37:44.690730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.935 ms 00:21:11.430 [2024-12-07 17:37:44.690744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.430 [2024-12-07 17:37:44.690831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.430 [2024-12-07 17:37:44.690842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:11.430 [2024-12-07 17:37:44.690852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:11.430 [2024-12-07 17:37:44.690860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.430 [2024-12-07 17:37:44.692243] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 299.374 ms, result 0 00:21:12.371  [2024-12-07T17:37:47.132Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-07T17:37:48.065Z] Copying: 44/1024 [MB] (32 MBps) [2024-12-07T17:37:49.008Z] Copying: 73/1024 [MB] (28 MBps) [2024-12-07T17:37:49.947Z] Copying: 90/1024 [MB] (17 MBps) [2024-12-07T17:37:50.892Z] Copying: 114/1024 [MB] (23 MBps) [2024-12-07T17:37:51.837Z] Copying: 134/1024 [MB] (19 MBps) [2024-12-07T17:37:52.930Z] Copying: 151/1024 [MB] (17 MBps) [2024-12-07T17:37:53.872Z] Copying: 171/1024 [MB] (19 MBps) [2024-12-07T17:37:54.824Z] Copying: 190/1024 [MB] (19 MBps) [2024-12-07T17:37:55.766Z] Copying: 208/1024 [MB] (18 MBps) [2024-12-07T17:37:56.710Z] Copying: 227/1024 [MB] (18 MBps) [2024-12-07T17:37:58.094Z] Copying: 257/1024 [MB] (29 MBps) [2024-12-07T17:37:59.038Z] Copying: 276/1024 [MB] (19 MBps) [2024-12-07T17:37:59.983Z] Copying: 295/1024 [MB] (18 MBps) [2024-12-07T17:38:00.927Z] Copying: 311/1024 [MB] (16 MBps) [2024-12-07T17:38:01.872Z] Copying: 327/1024 [MB] (15 MBps) [2024-12-07T17:38:02.817Z] Copying: 343/1024 [MB] (16 MBps) [2024-12-07T17:38:03.760Z] Copying: 365/1024 [MB] (21 MBps) [2024-12-07T17:38:04.706Z] Copying: 390/1024 [MB] (24 MBps) [2024-12-07T17:38:06.094Z] Copying: 410/1024 [MB] (19 MBps) [2024-12-07T17:38:07.035Z] Copying: 424/1024 [MB] (14 MBps) [2024-12-07T17:38:07.979Z] Copying: 446/1024 [MB] (21 MBps) [2024-12-07T17:38:08.924Z] Copying: 462/1024 [MB] (16 MBps) [2024-12-07T17:38:09.868Z] Copying: 476/1024 [MB] (14 MBps) [2024-12-07T17:38:10.810Z] Copying: 491/1024 [MB] (15 MBps) [2024-12-07T17:38:11.757Z] Copying: 508/1024 [MB] (16 MBps) [2024-12-07T17:38:13.149Z] Copying: 519/1024 [MB] (11 
MBps) [2024-12-07T17:38:13.724Z] Copying: 535/1024 [MB] (15 MBps) [2024-12-07T17:38:15.114Z] Copying: 547/1024 [MB] (11 MBps) [2024-12-07T17:38:16.056Z] Copying: 565/1024 [MB] (18 MBps) [2024-12-07T17:38:17.002Z] Copying: 575/1024 [MB] (10 MBps) [2024-12-07T17:38:17.947Z] Copying: 585/1024 [MB] (10 MBps) [2024-12-07T17:38:18.888Z] Copying: 596/1024 [MB] (10 MBps) [2024-12-07T17:38:19.833Z] Copying: 606/1024 [MB] (10 MBps) [2024-12-07T17:38:20.778Z] Copying: 620/1024 [MB] (13 MBps) [2024-12-07T17:38:21.723Z] Copying: 637/1024 [MB] (16 MBps) [2024-12-07T17:38:23.112Z] Copying: 653/1024 [MB] (16 MBps) [2024-12-07T17:38:24.057Z] Copying: 669/1024 [MB] (15 MBps) [2024-12-07T17:38:24.716Z] Copying: 685/1024 [MB] (16 MBps) [2024-12-07T17:38:26.106Z] Copying: 702/1024 [MB] (16 MBps) [2024-12-07T17:38:27.048Z] Copying: 720/1024 [MB] (18 MBps) [2024-12-07T17:38:27.995Z] Copying: 738/1024 [MB] (18 MBps) [2024-12-07T17:38:28.940Z] Copying: 758/1024 [MB] (19 MBps) [2024-12-07T17:38:29.882Z] Copying: 773/1024 [MB] (15 MBps) [2024-12-07T17:38:30.830Z] Copying: 786/1024 [MB] (12 MBps) [2024-12-07T17:38:31.778Z] Copying: 808/1024 [MB] (21 MBps) [2024-12-07T17:38:32.726Z] Copying: 818/1024 [MB] (10 MBps) [2024-12-07T17:38:34.101Z] Copying: 830/1024 [MB] (12 MBps) [2024-12-07T17:38:35.034Z] Copying: 865/1024 [MB] (35 MBps) [2024-12-07T17:38:35.967Z] Copying: 900/1024 [MB] (34 MBps) [2024-12-07T17:38:36.903Z] Copying: 935/1024 [MB] (35 MBps) [2024-12-07T17:38:37.836Z] Copying: 959/1024 [MB] (24 MBps) [2024-12-07T17:38:38.777Z] Copying: 990/1024 [MB] (30 MBps) [2024-12-07T17:38:39.349Z] Copying: 1012/1024 [MB] (21 MBps) [2024-12-07T17:38:39.349Z] Copying: 1024/1024 [MB] (average 18 MBps)
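The "Copying: N/1024 [MB] (… MBps)" lines above are spdk_dd's periodic progress updates; the bracketed ISO timestamps on them come from the CI log timestamper. The reported "average 18 MBps" can be sanity-checked from the log alone: 1024 MB moved between the first and last progress stamps. A minimal sketch of that arithmetic, assuming only the figures visible above (this is not SPDK code):

from datetime import datetime

# First and last progress timestamps copied from the log above (UTC).
first_stamp = datetime.fromisoformat("2024-12-07T17:37:47.132")
last_stamp = datetime.fromisoformat("2024-12-07T17:38:39.349")

total_mb = 1024                                          # per the final progress line
elapsed_s = (last_stamp - first_stamp).total_seconds()   # ~52.2 s
print(f"~{total_mb / elapsed_s:.1f} MBps")               # ~19.6 MBps

The result lands a little above the logged average because spdk_dd measures from the actual start of the copy (right after the 'FTL startup' finish at 17:37:44.7), a few seconds before the first progress line was printed.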
[2024-12-07 17:38:39.300036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.967 [2024-12-07 17:38:39.300071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:05.967 [2024-12-07 17:38:39.300082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:05.967 [2024-12-07 17:38:39.300089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.967 [2024-12-07 17:38:39.300105] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:05.967 [2024-12-07 17:38:39.302273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.967 [2024-12-07 17:38:39.302295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:05.967 [2024-12-07 17:38:39.302308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.157 ms 00:22:05.967 [2024-12-07 17:38:39.302314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.967 [2024-12-07 17:38:39.303750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.967 [2024-12-07 17:38:39.303778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:05.967 [2024-12-07 17:38:39.303786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.420 ms 00:22:05.967 [2024-12-07 17:38:39.303792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.967 [2024-12-07 17:38:39.316287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.967 [2024-12-07 17:38:39.316314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:05.967 [2024-12-07 17:38:39.316322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.484 ms 00:22:05.967 [2024-12-07
17:38:39.316333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.967 [2024-12-07 17:38:39.321095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.967 [2024-12-07 17:38:39.321119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:05.967 [2024-12-07 17:38:39.321127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.739 ms 00:22:05.967 [2024-12-07 17:38:39.321134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.967 [2024-12-07 17:38:39.339255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.968 [2024-12-07 17:38:39.339284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:05.968 [2024-12-07 17:38:39.339293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.092 ms 00:22:05.968 [2024-12-07 17:38:39.339299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.229 [2024-12-07 17:38:39.350804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.229 [2024-12-07 17:38:39.350831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:06.229 [2024-12-07 17:38:39.350840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.479 ms 00:22:06.229 [2024-12-07 17:38:39.350847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.229 [2024-12-07 17:38:39.350946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.229 [2024-12-07 17:38:39.350955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:06.229 [2024-12-07 17:38:39.350961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:06.229 [2024-12-07 17:38:39.350967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.229 [2024-12-07 17:38:39.368739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.229 [2024-12-07 17:38:39.368765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:06.229 [2024-12-07 17:38:39.368772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.762 ms 00:22:06.229 [2024-12-07 17:38:39.368778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.229 [2024-12-07 17:38:39.386200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.229 [2024-12-07 17:38:39.386226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:06.229 [2024-12-07 17:38:39.386233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.398 ms 00:22:06.229 [2024-12-07 17:38:39.386238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.229 [2024-12-07 17:38:39.403536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.229 [2024-12-07 17:38:39.403562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:06.229 [2024-12-07 17:38:39.403569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.273 ms 00:22:06.229 [2024-12-07 17:38:39.403575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.229 [2024-12-07 17:38:39.420633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.229 [2024-12-07 17:38:39.420658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:06.229 [2024-12-07 17:38:39.420666] mngt/ftl_mngt.c:
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.018 ms 00:22:06.229 [2024-12-07 17:38:39.420671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.229 [2024-12-07 17:38:39.420695] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:06.229 [2024-12-07 17:38:39.420709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:22:06.229 [2024-12-07 17:38:39.420841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.420996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.421002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.421007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.421013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:06.229 [2024-12-07 17:38:39.421019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421266] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:06.230 [2024-12-07 17:38:39.421290] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:06.230 [2024-12-07 17:38:39.421296] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 246c4a5e-6db5-4225-b39d-2b44d9866ffc 00:22:06.230 [2024-12-07 17:38:39.421302] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:06.230 [2024-12-07 17:38:39.421307] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:06.230 [2024-12-07 17:38:39.421312] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:06.230 [2024-12-07 17:38:39.421318] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:06.230 [2024-12-07 17:38:39.421323] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:06.230 [2024-12-07 17:38:39.421333] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:06.230 [2024-12-07 17:38:39.421338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:06.230 [2024-12-07 17:38:39.421343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:06.230 [2024-12-07 17:38:39.421347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:06.230 [2024-12-07 17:38:39.421352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.230 [2024-12-07 17:38:39.421358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:06.230 [2024-12-07 17:38:39.421364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.658 ms 00:22:06.230 [2024-12-07 17:38:39.421371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
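The ftl_dev_dump_bands listing above prints one line per band: valid blocks out of 261120, write count, and state. The ftl_dev_dump_stats block that follows shows "WAF: inf", which is consistent with write amplification computed as total writes over user writes (960 / 0 here, since this run recorded no user writes by shutdown time). One hundred near-identical band lines condense well into a histogram; a minimal sketch of such a condenser (hypothetical helper, not part of SPDK; the regex matches exactly the fields printed above):

import re
from collections import Counter

# Matches e.g. "Band 42: 0 / 261120 wr_cnt: 0 state: free"
BAND_RE = re.compile(r"Band (\d+): (\d+) / (\d+) wr_cnt: (\d+) state: (\w+)")

def summarize_bands(log_text: str) -> str:
    states = Counter()
    valid_blocks = 0
    for _band, valid, _size, _wr_cnt, state in BAND_RE.findall(log_text):
        states[state] += 1
        valid_blocks += int(valid)
    return f"{sum(states.values())} bands, {valid_blocks} valid blocks, states: {dict(states)}"

# Fed the dump above, this returns: "100 bands, 0 valid blocks, states: {'free': 100}"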
00:22:06.230 [2024-12-07 17:38:39.430733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.230 [2024-12-07 17:38:39.430754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:06.230 [2024-12-07 17:38:39.430762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.349 ms 00:22:06.230 [2024-12-07 17:38:39.430769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.230 [2024-12-07 17:38:39.431044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.230 [2024-12-07 17:38:39.431060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:06.230 [2024-12-07 17:38:39.431071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:22:06.230 [2024-12-07 17:38:39.431076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.230 [2024-12-07 17:38:39.456702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.230 [2024-12-07 17:38:39.456729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:06.230 [2024-12-07 17:38:39.456737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.230 [2024-12-07 17:38:39.456743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.230 [2024-12-07 17:38:39.456780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:06.230 [2024-12-07 17:38:39.456786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:06.230 [2024-12-07 17:38:39.456794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.230 [2024-12-07 17:38:39.456800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.230 [2024-12-07 17:38:39.456839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.230 [2024-12-07 17:38:39.456847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:06.230 [2024-12-07 17:38:39.456852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.230 [2024-12-07 17:38:39.456858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.230 [2024-12-07 17:38:39.456869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.230 [2024-12-07 17:38:39.456875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:06.230 [2024-12-07 17:38:39.456880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.230 [2024-12-07 17:38:39.456888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.230 [2024-12-07 17:38:39.517565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.230 [2024-12-07 17:38:39.517602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:06.230 [2024-12-07 17:38:39.517612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.230 [2024-12-07 17:38:39.517618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.230 [2024-12-07 17:38:39.566517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.230 [2024-12-07 17:38:39.566550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:06.230 [2024-12-07 17:38:39.566559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.230 [2024-12-07 17:38:39.566570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.230 [2024-12-07 17:38:39.566625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.230 [2024-12-07 17:38:39.566633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:06.230 [2024-12-07 17:38:39.566639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.230 [2024-12-07 17:38:39.566645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.230 [2024-12-07 17:38:39.566670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.230 [2024-12-07 17:38:39.566678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:06.230 [2024-12-07 17:38:39.566684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.230 [2024-12-07 17:38:39.566689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.231 [2024-12-07 17:38:39.566757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.231 [2024-12-07 17:38:39.566765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:06.231 [2024-12-07 17:38:39.566771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.231 [2024-12-07 17:38:39.566777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.231 [2024-12-07
17:38:39.566800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.231 [2024-12-07 17:38:39.566807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:06.231 [2024-12-07 17:38:39.566812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.231 [2024-12-07 17:38:39.566818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.231 [2024-12-07 17:38:39.566854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.231 [2024-12-07 17:38:39.566861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:06.231 [2024-12-07 17:38:39.566866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.231 [2024-12-07 17:38:39.566872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.231 [2024-12-07 17:38:39.566902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:06.231 [2024-12-07 17:38:39.566910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:06.231 [2024-12-07 17:38:39.566915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:06.231 [2024-12-07 17:38:39.566922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.231 [2024-12-07 17:38:39.567031] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 266.953 ms, result 0 00:22:07.171 00:22:07.171 00:22:07.171 17:38:40 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:07.431 [2024-12-07 17:38:40.614768] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
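Each step traced by mngt/ftl_mngt.c above is a four-line group: "Action" (or "Rollback"), then "name: …", "duration: … ms", and "status: …"; finish_msg then totals the whole management process ('FTL startup' took 299.374 ms earlier, and the 'FTL shutdown' just above took 266.953 ms). Since name and duration lines alternate one-for-one within each group, pairing them is enough to rank the slow steps. A minimal sketch, assuming each NOTICE entry sits on its own line as in the raw console log (hypothetical helper, not part of SPDK):

import re

NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)$", re.M)
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

def slowest_steps(log_text: str, top: int = 3):
    names = [n.strip() for n in NAME_RE.findall(log_text)]
    durations = [float(d) for d in DUR_RE.findall(log_text)]
    # zip relies on the strict name/duration alternation of trace_step groups
    return sorted(zip(names, durations), key=lambda p: p[1], reverse=True)[:top]

# On the 'FTL startup' trace above this surfaces, for example,
# ('Restore P2L checkpoints', 65.098) and ('Initialize NV cache', 46.623).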
00:22:07.431 [2024-12-07 17:38:40.614886] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78112 ] 00:22:07.431 [2024-12-07 17:38:40.770751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.689 [2024-12-07 17:38:40.854536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.689 [2024-12-07 17:38:41.063021] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:07.689 [2024-12-07 17:38:41.063067] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:07.947 [2024-12-07 17:38:41.210358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.947 [2024-12-07 17:38:41.210392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:07.947 [2024-12-07 17:38:41.210403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:07.947 [2024-12-07 17:38:41.210409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.947 [2024-12-07 17:38:41.210442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.947 [2024-12-07 17:38:41.210452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:07.947 [2024-12-07 17:38:41.210458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:07.947 [2024-12-07 17:38:41.210463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.947 [2024-12-07 17:38:41.210476] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:07.947 [2024-12-07 17:38:41.211016] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:07.947 [2024-12-07 17:38:41.211035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.947 [2024-12-07 17:38:41.211041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:07.947 [2024-12-07 17:38:41.211047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:22:07.947 [2024-12-07 17:38:41.211053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.947 [2024-12-07 17:38:41.212003] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:07.947 [2024-12-07 17:38:41.221470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.947 [2024-12-07 17:38:41.221494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:07.947 [2024-12-07 17:38:41.221503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.468 ms 00:22:07.947 [2024-12-07 17:38:41.221515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.947 [2024-12-07 17:38:41.221560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.947 [2024-12-07 17:38:41.221567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:07.947 [2024-12-07 17:38:41.221573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:07.947 [2024-12-07 17:38:41.221579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.947 [2024-12-07 17:38:41.225911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:07.947 [2024-12-07 17:38:41.225932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:07.947 [2024-12-07 17:38:41.225939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.296 ms 00:22:07.947 [2024-12-07 17:38:41.225948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.947 [2024-12-07 17:38:41.226013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.947 [2024-12-07 17:38:41.226020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:07.947 [2024-12-07 17:38:41.226026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:07.947 [2024-12-07 17:38:41.226032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.947 [2024-12-07 17:38:41.226070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.947 [2024-12-07 17:38:41.226078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:07.947 [2024-12-07 17:38:41.226084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:07.947 [2024-12-07 17:38:41.226090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.947 [2024-12-07 17:38:41.226107] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:07.947 [2024-12-07 17:38:41.228750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.947 [2024-12-07 17:38:41.228769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:07.947 [2024-12-07 17:38:41.228778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.646 ms 00:22:07.947 [2024-12-07 17:38:41.228783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.947 [2024-12-07 17:38:41.228808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.948 [2024-12-07 17:38:41.228815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:07.948 [2024-12-07 17:38:41.228821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:07.948 [2024-12-07 17:38:41.228826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.948 [2024-12-07 17:38:41.228840] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:07.948 [2024-12-07 17:38:41.228855] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:07.948 [2024-12-07 17:38:41.228880] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:07.948 [2024-12-07 17:38:41.228893] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:07.948 [2024-12-07 17:38:41.228970] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:07.948 [2024-12-07 17:38:41.228978] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:07.948 [2024-12-07 17:38:41.228995] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:07.948 [2024-12-07 17:38:41.229002] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:07.948 [2024-12-07 17:38:41.229013] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:07.948 [2024-12-07 17:38:41.229019] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:07.948 [2024-12-07 17:38:41.229025] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:07.948 [2024-12-07 17:38:41.229032] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:07.948 [2024-12-07 17:38:41.229038] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:07.948 [2024-12-07 17:38:41.229043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.948 [2024-12-07 17:38:41.229049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:07.948 [2024-12-07 17:38:41.229055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:22:07.948 [2024-12-07 17:38:41.229060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.948 [2024-12-07 17:38:41.229123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.948 [2024-12-07 17:38:41.229129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:07.948 [2024-12-07 17:38:41.229134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:07.948 [2024-12-07 17:38:41.229139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.948 [2024-12-07 17:38:41.229214] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:07.948 [2024-12-07 17:38:41.229227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:07.948 [2024-12-07 17:38:41.229233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:07.948 [2024-12-07 17:38:41.229239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.948 [2024-12-07 17:38:41.229245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:07.948 [2024-12-07 17:38:41.229252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:07.948 [2024-12-07 17:38:41.229257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:07.948 [2024-12-07 17:38:41.229262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:07.948 [2024-12-07 17:38:41.229268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:07.948 [2024-12-07 17:38:41.229273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:07.948 [2024-12-07 17:38:41.229278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:07.948 [2024-12-07 17:38:41.229283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:07.948 [2024-12-07 17:38:41.229288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:07.948 [2024-12-07 17:38:41.229297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:07.948 [2024-12-07 17:38:41.229302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:07.948 [2024-12-07 17:38:41.229307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.948 [2024-12-07 17:38:41.229312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:07.948 [2024-12-07 17:38:41.229317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:07.948 [2024-12-07 17:38:41.229323] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.948 [2024-12-07 17:38:41.229328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:07.948 [2024-12-07 17:38:41.229333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:07.948 [2024-12-07 17:38:41.229339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:07.948 [2024-12-07 17:38:41.229344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:07.948 [2024-12-07 17:38:41.229348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:07.948 [2024-12-07 17:38:41.229353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:07.948 [2024-12-07 17:38:41.229358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:07.948 [2024-12-07 17:38:41.229363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:07.948 [2024-12-07 17:38:41.229368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:07.948 [2024-12-07 17:38:41.229373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:07.948 [2024-12-07 17:38:41.229378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:07.948 [2024-12-07 17:38:41.229383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:07.948 [2024-12-07 17:38:41.229388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:07.948 [2024-12-07 17:38:41.229393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:07.948 [2024-12-07 17:38:41.229398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:07.948 [2024-12-07 17:38:41.229403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:07.948 [2024-12-07 17:38:41.229408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:07.948 [2024-12-07 17:38:41.229413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:07.948 [2024-12-07 17:38:41.229420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:07.948 [2024-12-07 17:38:41.229425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:07.948 [2024-12-07 17:38:41.229430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.948 [2024-12-07 17:38:41.229435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:07.948 [2024-12-07 17:38:41.229440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:07.948 [2024-12-07 17:38:41.229445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.948 [2024-12-07 17:38:41.229450] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:07.948 [2024-12-07 17:38:41.229455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:07.948 [2024-12-07 17:38:41.229461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:07.948 [2024-12-07 17:38:41.229467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.948 [2024-12-07 17:38:41.229472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:07.948 [2024-12-07 17:38:41.229478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:07.948 [2024-12-07 17:38:41.229483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:07.948 
[2024-12-07 17:38:41.229488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:07.948 [2024-12-07 17:38:41.229493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:07.948 [2024-12-07 17:38:41.229498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:07.948 [2024-12-07 17:38:41.229504] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:07.948 [2024-12-07 17:38:41.229520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:07.948 [2024-12-07 17:38:41.229529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:07.948 [2024-12-07 17:38:41.229536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:07.948 [2024-12-07 17:38:41.229541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:07.948 [2024-12-07 17:38:41.229547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:07.948 [2024-12-07 17:38:41.229552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:07.948 [2024-12-07 17:38:41.229557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:07.948 [2024-12-07 17:38:41.229563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:07.948 [2024-12-07 17:38:41.229568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:07.948 [2024-12-07 17:38:41.229574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:07.948 [2024-12-07 17:38:41.229580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:07.948 [2024-12-07 17:38:41.229585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:07.948 [2024-12-07 17:38:41.229590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:07.948 [2024-12-07 17:38:41.229597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:07.948 [2024-12-07 17:38:41.229603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:07.948 [2024-12-07 17:38:41.229609] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:07.948 [2024-12-07 17:38:41.229615] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:07.948 [2024-12-07 17:38:41.229621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:07.948 [2024-12-07 17:38:41.229627] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:07.949 [2024-12-07 17:38:41.229632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:07.949 [2024-12-07 17:38:41.229638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:07.949 [2024-12-07 17:38:41.229643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.949 [2024-12-07 17:38:41.229649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:07.949 [2024-12-07 17:38:41.229654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:22:07.949 [2024-12-07 17:38:41.229660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.949 [2024-12-07 17:38:41.250494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.949 [2024-12-07 17:38:41.250605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:07.949 [2024-12-07 17:38:41.250650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.803 ms 00:22:07.949 [2024-12-07 17:38:41.250672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.949 [2024-12-07 17:38:41.250747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.949 [2024-12-07 17:38:41.250764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:07.949 [2024-12-07 17:38:41.250779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:07.949 [2024-12-07 17:38:41.250794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.949 [2024-12-07 17:38:41.288255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.949 [2024-12-07 17:38:41.288367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:07.949 [2024-12-07 17:38:41.288413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.411 ms 00:22:07.949 [2024-12-07 17:38:41.288431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.949 [2024-12-07 17:38:41.288472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.949 [2024-12-07 17:38:41.288490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:07.949 [2024-12-07 17:38:41.288509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:07.949 [2024-12-07 17:38:41.288523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.949 [2024-12-07 17:38:41.288832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.949 [2024-12-07 17:38:41.288864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:07.949 [2024-12-07 17:38:41.288880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:22:07.949 [2024-12-07 17:38:41.288895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.949 [2024-12-07 17:38:41.289016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.949 [2024-12-07 17:38:41.289036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:07.949 [2024-12-07 17:38:41.289052] 
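The superblock metadata regions dumped above should tile the NV cache with no gaps: each region's blk_offs equals the previous blk_offs plus blk_sz, and the trailing free region (type 0xfffffffe) runs out to the end of the device. A hedged, standalone check of this run's nvc table (Python; the 4 KiB block size is an assumption about the FTL block unit, not something the log states):

import re

# Region records copied verbatim from the "SB metadata layout - nvc" dump above.
NVC_REGIONS = """
Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
"""

next_offs = 0
for offs, size in re.findall(r"blk_offs:(0x[0-9a-f]+) blk_sz:(0x[0-9a-f]+)", NVC_REGIONS):
    offs, size = int(offs, 16), int(size, 16)
    assert offs == next_offs, f"gap or overlap before {offs:#x}"
    next_offs = offs + size

# 0x143300 blocks * 4096 B/block = 5171.00 MiB -- the "NV cache device capacity" reported above.
print(f"{next_offs:#x} blocks = {next_offs * 4096 / 2**20:.2f} MiB")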
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:22:07.949 [2024-12-07 17:38:41.289071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.949 [2024-12-07 17:38:41.299552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.949 [2024-12-07 17:38:41.299638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:07.949 [2024-12-07 17:38:41.299680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.420 ms 00:22:07.949 [2024-12-07 17:38:41.299696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.949 [2024-12-07 17:38:41.309389] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:07.949 [2024-12-07 17:38:41.309490] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:07.949 [2024-12-07 17:38:41.309548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.949 [2024-12-07 17:38:41.309564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:07.949 [2024-12-07 17:38:41.309579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.773 ms 00:22:07.949 [2024-12-07 17:38:41.309593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.207 [2024-12-07 17:38:41.328065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.207 [2024-12-07 17:38:41.328154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:08.207 [2024-12-07 17:38:41.328193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.438 ms 00:22:08.207 [2024-12-07 17:38:41.328210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.207 [2024-12-07 17:38:41.337029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.207 [2024-12-07 17:38:41.337112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:08.207 [2024-12-07 17:38:41.337151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.775 ms 00:22:08.207 [2024-12-07 17:38:41.337167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.207 [2024-12-07 17:38:41.345612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.207 [2024-12-07 17:38:41.345693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:08.207 [2024-12-07 17:38:41.345730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.415 ms 00:22:08.207 [2024-12-07 17:38:41.345746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.207 [2024-12-07 17:38:41.346205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.207 [2024-12-07 17:38:41.346277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:08.207 [2024-12-07 17:38:41.346324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:22:08.207 [2024-12-07 17:38:41.346340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.207 [2024-12-07 17:38:41.390722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.208 [2024-12-07 17:38:41.390845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:08.208 [2024-12-07 17:38:41.390911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
44.357 ms 00:22:08.208 [2024-12-07 17:38:41.390931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.208 [2024-12-07 17:38:41.399029] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:08.208 [2024-12-07 17:38:41.400952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.208 [2024-12-07 17:38:41.401054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:08.208 [2024-12-07 17:38:41.401102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.759 ms 00:22:08.208 [2024-12-07 17:38:41.401121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.208 [2024-12-07 17:38:41.401200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.208 [2024-12-07 17:38:41.401460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:08.208 [2024-12-07 17:38:41.401484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:08.208 [2024-12-07 17:38:41.401491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.208 [2024-12-07 17:38:41.401573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.208 [2024-12-07 17:38:41.401583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:08.208 [2024-12-07 17:38:41.401590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:22:08.208 [2024-12-07 17:38:41.401596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.208 [2024-12-07 17:38:41.401614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.208 [2024-12-07 17:38:41.401620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:08.208 [2024-12-07 17:38:41.401627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:08.208 [2024-12-07 17:38:41.401633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.208 [2024-12-07 17:38:41.401658] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:08.208 [2024-12-07 17:38:41.401666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.208 [2024-12-07 17:38:41.401672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:08.208 [2024-12-07 17:38:41.401678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:08.208 [2024-12-07 17:38:41.401684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.208 [2024-12-07 17:38:41.419662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.208 [2024-12-07 17:38:41.419763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:08.208 [2024-12-07 17:38:41.419810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.962 ms 00:22:08.208 [2024-12-07 17:38:41.419828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.208 [2024-12-07 17:38:41.419919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.208 [2024-12-07 17:38:41.419954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:08.208 [2024-12-07 17:38:41.419970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:08.208 [2024-12-07 17:38:41.419995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.208 
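Every management step in this sequence is traced as the same four notices from mngt/ftl_mngt.c (Action, name, duration, status), which makes the trace easy to reduce mechanically. A small sketch that folds excerpted fields back into per-step rows and totals them (Python; the excerpt is hand-copied from the startup trace above, and per-step durations need not sum to the overall 'FTL startup' duration reported next, since steps include asynchronous waits):

import re

TRACE = """\
name: Initialize layout
duration: 0.205 ms
status: 0
name: Verify layout
duration: 0.053 ms
status: 0
name: Layout upgrade
duration: 0.481 ms
status: 0
"""

fields = re.findall(r"(name|duration|status): (.+)", TRACE)
steps = [(fields[i][1], float(fields[i + 1][1].split()[0]), int(fields[i + 2][1]))
         for i in range(0, len(fields), 3)]
for name, ms, status in steps:
    print(f"{name:<20} {ms:>8.3f} ms  status={status}")
print(f"total {sum(ms for _, ms, _ in steps):.3f} ms")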
[2024-12-07 17:38:41.420718] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 210.029 ms, result 0 00:22:09.585  [2024-12-07T17:38:43.910Z] Copying: 14/1024 [MB] (14 MBps) [2024-12-07T17:38:44.856Z] Copying: 35/1024 [MB] (21 MBps) [2024-12-07T17:38:45.798Z] Copying: 54/1024 [MB] (18 MBps) [2024-12-07T17:38:46.742Z] Copying: 75/1024 [MB] (21 MBps) [2024-12-07T17:38:47.682Z] Copying: 97/1024 [MB] (21 MBps) [2024-12-07T17:38:48.668Z] Copying: 119/1024 [MB] (22 MBps) [2024-12-07T17:38:49.608Z] Copying: 132/1024 [MB] (12 MBps) [2024-12-07T17:38:50.993Z] Copying: 158/1024 [MB] (25 MBps) [2024-12-07T17:38:51.565Z] Copying: 177/1024 [MB] (19 MBps) [2024-12-07T17:38:52.951Z] Copying: 194/1024 [MB] (16 MBps) [2024-12-07T17:38:53.897Z] Copying: 205/1024 [MB] (11 MBps) [2024-12-07T17:38:54.836Z] Copying: 216/1024 [MB] (10 MBps) [2024-12-07T17:38:55.829Z] Copying: 236/1024 [MB] (20 MBps) [2024-12-07T17:38:56.770Z] Copying: 264/1024 [MB] (27 MBps) [2024-12-07T17:38:57.713Z] Copying: 279/1024 [MB] (15 MBps) [2024-12-07T17:38:58.653Z] Copying: 295/1024 [MB] (16 MBps) [2024-12-07T17:38:59.594Z] Copying: 318/1024 [MB] (22 MBps) [2024-12-07T17:39:00.979Z] Copying: 337/1024 [MB] (19 MBps) [2024-12-07T17:39:01.925Z] Copying: 357/1024 [MB] (19 MBps) [2024-12-07T17:39:02.872Z] Copying: 379/1024 [MB] (22 MBps) [2024-12-07T17:39:03.818Z] Copying: 396/1024 [MB] (17 MBps) [2024-12-07T17:39:04.765Z] Copying: 412/1024 [MB] (15 MBps) [2024-12-07T17:39:05.708Z] Copying: 423/1024 [MB] (10 MBps) [2024-12-07T17:39:06.649Z] Copying: 434/1024 [MB] (11 MBps) [2024-12-07T17:39:07.600Z] Copying: 453/1024 [MB] (19 MBps) [2024-12-07T17:39:08.990Z] Copying: 472/1024 [MB] (18 MBps) [2024-12-07T17:39:09.563Z] Copying: 491/1024 [MB] (19 MBps) [2024-12-07T17:39:10.960Z] Copying: 504/1024 [MB] (12 MBps) [2024-12-07T17:39:11.904Z] Copying: 526/1024 [MB] (22 MBps) [2024-12-07T17:39:12.846Z] Copying: 541/1024 [MB] (15 MBps) [2024-12-07T17:39:13.788Z] Copying: 560/1024 [MB] (18 MBps) [2024-12-07T17:39:14.731Z] Copying: 584/1024 [MB] (24 MBps) [2024-12-07T17:39:15.674Z] Copying: 600/1024 [MB] (15 MBps) [2024-12-07T17:39:16.617Z] Copying: 619/1024 [MB] (18 MBps) [2024-12-07T17:39:17.563Z] Copying: 639/1024 [MB] (19 MBps) [2024-12-07T17:39:18.947Z] Copying: 660/1024 [MB] (20 MBps) [2024-12-07T17:39:19.893Z] Copying: 684/1024 [MB] (24 MBps) [2024-12-07T17:39:20.835Z] Copying: 705/1024 [MB] (20 MBps) [2024-12-07T17:39:21.776Z] Copying: 725/1024 [MB] (20 MBps) [2024-12-07T17:39:22.723Z] Copying: 748/1024 [MB] (23 MBps) [2024-12-07T17:39:23.669Z] Copying: 767/1024 [MB] (18 MBps) [2024-12-07T17:39:24.614Z] Copying: 777/1024 [MB] (10 MBps) [2024-12-07T17:39:25.560Z] Copying: 788/1024 [MB] (10 MBps) [2024-12-07T17:39:26.947Z] Copying: 798/1024 [MB] (10 MBps) [2024-12-07T17:39:27.944Z] Copying: 812/1024 [MB] (13 MBps) [2024-12-07T17:39:28.886Z] Copying: 825/1024 [MB] (12 MBps) [2024-12-07T17:39:29.830Z] Copying: 836/1024 [MB] (10 MBps) [2024-12-07T17:39:30.773Z] Copying: 846/1024 [MB] (10 MBps) [2024-12-07T17:39:31.716Z] Copying: 857/1024 [MB] (10 MBps) [2024-12-07T17:39:32.662Z] Copying: 867/1024 [MB] (10 MBps) [2024-12-07T17:39:33.607Z] Copying: 878/1024 [MB] (11 MBps) [2024-12-07T17:39:34.996Z] Copying: 895/1024 [MB] (16 MBps) [2024-12-07T17:39:35.574Z] Copying: 911/1024 [MB] (16 MBps) [2024-12-07T17:39:36.953Z] Copying: 930/1024 [MB] (19 MBps) [2024-12-07T17:39:37.887Z] Copying: 955/1024 [MB] (24 MBps) [2024-12-07T17:39:38.827Z] Copying: 973/1024 [MB] (18 MBps) 
[2024-12-07T17:39:39.767Z] Copying: 995/1024 [MB] (21 MBps) [2024-12-07T17:39:40.027Z] Copying: 1015/1024 [MB] (20 MBps) [2024-12-07T17:39:40.286Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-12-07 17:39:40.202641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.904 [2024-12-07 17:39:40.202748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:06.904 [2024-12-07 17:39:40.202771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:06.904 [2024-12-07 17:39:40.202784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.904 [2024-12-07 17:39:40.202820] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:06.904 [2024-12-07 17:39:40.208077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.904 [2024-12-07 17:39:40.208143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:06.904 [2024-12-07 17:39:40.208160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.231 ms 00:23:06.904 [2024-12-07 17:39:40.208173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.904 [2024-12-07 17:39:40.208537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.904 [2024-12-07 17:39:40.208553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:06.904 [2024-12-07 17:39:40.208567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:23:06.904 [2024-12-07 17:39:40.208580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.904 [2024-12-07 17:39:40.214039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.904 [2024-12-07 17:39:40.214065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:06.904 [2024-12-07 17:39:40.214075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.437 ms 00:23:06.904 [2024-12-07 17:39:40.214089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.904 [2024-12-07 17:39:40.220250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.904 [2024-12-07 17:39:40.220288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:06.904 [2024-12-07 17:39:40.220299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.142 ms 00:23:06.904 [2024-12-07 17:39:40.220307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.904 [2024-12-07 17:39:40.246771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.905 [2024-12-07 17:39:40.246832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:06.905 [2024-12-07 17:39:40.246846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.390 ms 00:23:06.905 [2024-12-07 17:39:40.246853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.905 [2024-12-07 17:39:40.263680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.905 [2024-12-07 17:39:40.263725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:06.905 [2024-12-07 17:39:40.263737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.777 ms 00:23:06.905 [2024-12-07 17:39:40.263746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.905 [2024-12-07 17:39:40.263902] mngt/ftl_mngt.c: 
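The copy above completes at 1024/1024 MB with a reported "average 17 MBps", which squares with the wall clock: roughly 59 s elapse between the FTL startup finishing (17:38:41.42) and the final progress stamp (17:39:40.286). A quick check (Python; timestamps read off this run's log):

from datetime import datetime

start = datetime(2024, 12, 7, 17, 38, 41, 420000)   # 'FTL startup' finished
end = datetime(2024, 12, 7, 17, 39, 40, 286000)     # Copying: 1024/1024 [MB]
elapsed = (end - start).total_seconds()
print(f"{1024 / elapsed:.1f} MBps over {elapsed:.1f} s")   # ~17.4 MBps, matching "average 17 MBps"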
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.905 [2024-12-07 17:39:40.263914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:06.905 [2024-12-07 17:39:40.263924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:23:06.905 [2024-12-07 17:39:40.263932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.164 [2024-12-07 17:39:40.290165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.164 [2024-12-07 17:39:40.290367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:07.164 [2024-12-07 17:39:40.290388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.216 ms 00:23:07.164 [2024-12-07 17:39:40.290396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.164 [2024-12-07 17:39:40.315706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.164 [2024-12-07 17:39:40.315752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:07.164 [2024-12-07 17:39:40.315766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.187 ms 00:23:07.164 [2024-12-07 17:39:40.315773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.164 [2024-12-07 17:39:40.340068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.164 [2024-12-07 17:39:40.340114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:07.164 [2024-12-07 17:39:40.340127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.249 ms 00:23:07.164 [2024-12-07 17:39:40.340134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.164 [2024-12-07 17:39:40.364752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.164 [2024-12-07 17:39:40.364796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:07.164 [2024-12-07 17:39:40.364808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.545 ms 00:23:07.164 [2024-12-07 17:39:40.364816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.164 [2024-12-07 17:39:40.364859] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:07.164 [2024-12-07 17:39:40.364881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.364896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.364904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.364913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.364921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.364929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.364937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.364945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.364953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.364962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.364970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.364978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.365004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.365013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.365021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.365029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.365037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.365045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.365053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.365061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.365069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.365077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.365085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.365093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.365100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.365109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.365141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:07.164 [2024-12-07 17:39:40.365150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365203] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 
17:39:40.365408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 
00:23:07.165 [2024-12-07 17:39:40.365623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:07.165 [2024-12-07 17:39:40.365767] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:07.165 [2024-12-07 17:39:40.365774] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 246c4a5e-6db5-4225-b39d-2b44d9866ffc 00:23:07.165 [2024-12-07 17:39:40.365782] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:07.165 [2024-12-07 17:39:40.365790] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:07.165 [2024-12-07 17:39:40.365798] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:07.165 [2024-12-07 17:39:40.365807] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:07.165 [2024-12-07 17:39:40.365822] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:07.165 [2024-12-07 17:39:40.365830] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:07.165 [2024-12-07 17:39:40.365838] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:07.165 [2024-12-07 17:39:40.365845] ftl_debug.c: 
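After the clean shutdown above, the bands-validity dump shows all 100 bands identical (0 / 261120 valid blocks, wr_cnt 0, state free), and the stats block reports WAF as inf, which is what a total-writes over user-writes ratio degenerates to when user writes are 0 (here 960 internal writes against none from the user; that definition of WAF is an assumption, the log does not spell it out). For longer dumps it is handier to aggregate than to eyeball; a sketch:

import re
from collections import Counter

def summarize_bands(log_text):
    # Matches records like "Band 42: 0 / 261120 wr_cnt: 0 state: free".
    bands = re.findall(r"Band (\d+): (\d+) / (\d+) wr_cnt: (\d+) state: (\w+)", log_text)
    return len(bands), Counter(state for *_, state in bands)

sample = ("Band 1: 0 / 261120 wr_cnt: 0 state: free "
          "Band 2: 0 / 261120 wr_cnt: 0 state: free")
count, states = summarize_bands(sample)
print(count, dict(states))   # -> 2 {'free': 2}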
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:07.165 [2024-12-07 17:39:40.365852] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:07.165 [2024-12-07 17:39:40.365860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.165 [2024-12-07 17:39:40.365868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:07.165 [2024-12-07 17:39:40.365878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.003 ms 00:23:07.165 [2024-12-07 17:39:40.365888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.165 [2024-12-07 17:39:40.379250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.165 [2024-12-07 17:39:40.379291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:07.165 [2024-12-07 17:39:40.379303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.342 ms 00:23:07.165 [2024-12-07 17:39:40.379311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.165 [2024-12-07 17:39:40.379705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.165 [2024-12-07 17:39:40.379715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:07.166 [2024-12-07 17:39:40.379731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:23:07.166 [2024-12-07 17:39:40.379738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.166 [2024-12-07 17:39:40.415999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.166 [2024-12-07 17:39:40.416045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:07.166 [2024-12-07 17:39:40.416057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.166 [2024-12-07 17:39:40.416067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.166 [2024-12-07 17:39:40.416132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.166 [2024-12-07 17:39:40.416143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:07.166 [2024-12-07 17:39:40.416156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.166 [2024-12-07 17:39:40.416166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.166 [2024-12-07 17:39:40.416248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.166 [2024-12-07 17:39:40.416258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:07.166 [2024-12-07 17:39:40.416268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.166 [2024-12-07 17:39:40.416277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.166 [2024-12-07 17:39:40.416294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.166 [2024-12-07 17:39:40.416303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:07.166 [2024-12-07 17:39:40.416313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.166 [2024-12-07 17:39:40.416324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.166 [2024-12-07 17:39:40.499699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.166 [2024-12-07 17:39:40.500005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
cache 00:23:07.166 [2024-12-07 17:39:40.500030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.166 [2024-12-07 17:39:40.500039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.426 [2024-12-07 17:39:40.568920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.426 [2024-12-07 17:39:40.568977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:07.426 [2024-12-07 17:39:40.569012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.426 [2024-12-07 17:39:40.569021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.426 [2024-12-07 17:39:40.569084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.426 [2024-12-07 17:39:40.569094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:07.426 [2024-12-07 17:39:40.569103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.426 [2024-12-07 17:39:40.569112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.426 [2024-12-07 17:39:40.569170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.426 [2024-12-07 17:39:40.569181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:07.426 [2024-12-07 17:39:40.569190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.426 [2024-12-07 17:39:40.569198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.426 [2024-12-07 17:39:40.569298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.426 [2024-12-07 17:39:40.569309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:07.426 [2024-12-07 17:39:40.569318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.426 [2024-12-07 17:39:40.569325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.426 [2024-12-07 17:39:40.569363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.426 [2024-12-07 17:39:40.569373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:07.426 [2024-12-07 17:39:40.569382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.426 [2024-12-07 17:39:40.569391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.426 [2024-12-07 17:39:40.569435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.426 [2024-12-07 17:39:40.569445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:07.426 [2024-12-07 17:39:40.569454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.426 [2024-12-07 17:39:40.569462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.426 [2024-12-07 17:39:40.569521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.426 [2024-12-07 17:39:40.569532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:07.426 [2024-12-07 17:39:40.569541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.426 [2024-12-07 17:39:40.569549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.426 [2024-12-07 17:39:40.569685] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 
367.020 ms, result 0 00:23:07.997 00:23:07.997 00:23:07.997 17:39:41 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:10.544 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:10.544 17:39:43 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:23:10.544 [2024-12-07 17:39:43.577633] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:23:10.544 [2024-12-07 17:39:43.577747] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78752 ] 00:23:10.544 [2024-12-07 17:39:43.731808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.544 [2024-12-07 17:39:43.834423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.805 [2024-12-07 17:39:44.130592] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:10.805 [2024-12-07 17:39:44.130677] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:11.068 [2024-12-07 17:39:44.293415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.068 [2024-12-07 17:39:44.293676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:11.068 [2024-12-07 17:39:44.293702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:11.068 [2024-12-07 17:39:44.293712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.068 [2024-12-07 17:39:44.293785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.068 [2024-12-07 17:39:44.293800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:11.068 [2024-12-07 17:39:44.293811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:23:11.068 [2024-12-07 17:39:44.293819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.068 [2024-12-07 17:39:44.293842] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:11.068 [2024-12-07 17:39:44.294617] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:11.068 [2024-12-07 17:39:44.294647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.068 [2024-12-07 17:39:44.294655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:11.068 [2024-12-07 17:39:44.294666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.810 ms 00:23:11.068 [2024-12-07 17:39:44.294675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.068 [2024-12-07 17:39:44.296470] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:11.068 [2024-12-07 17:39:44.310659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.068 [2024-12-07 17:39:44.310709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:11.068 [2024-12-07 17:39:44.310723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.191 ms 00:23:11.068 [2024-12-07 17:39:44.310732] mngt/ftl_mngt.c: 
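The md5sum -c step above is the pass/fail heart of ftl_restore: the test data must hash to the digest recorded earlier in the run, showing the shutdown/startup cycle preserved it. An equivalent standalone check (Python; the file names are placeholders standing in for this run's testfile/testfile.md5 pair):

import hashlib

def md5_of(path, chunk=1 << 20):
    # Hash the file in 1 MiB chunks so large test files need not fit in memory.
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# md5sum writes "digest  filename"; compare against the recorded digest.
expected = open("testfile.md5").read().split()[0]
assert md5_of("testfile") == expected, "restored data does not match recorded digest"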
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.068 [2024-12-07 17:39:44.310818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.068 [2024-12-07 17:39:44.310829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:11.069 [2024-12-07 17:39:44.310839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:11.069 [2024-12-07 17:39:44.310847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.069 [2024-12-07 17:39:44.319159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.069 [2024-12-07 17:39:44.319201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:11.069 [2024-12-07 17:39:44.319212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.231 ms 00:23:11.069 [2024-12-07 17:39:44.319228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.069 [2024-12-07 17:39:44.319309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.069 [2024-12-07 17:39:44.319319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:11.069 [2024-12-07 17:39:44.319328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:11.069 [2024-12-07 17:39:44.319337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.069 [2024-12-07 17:39:44.319382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.069 [2024-12-07 17:39:44.319393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:11.069 [2024-12-07 17:39:44.319401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:11.069 [2024-12-07 17:39:44.319410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.069 [2024-12-07 17:39:44.319438] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:11.069 [2024-12-07 17:39:44.323396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.069 [2024-12-07 17:39:44.323435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:11.069 [2024-12-07 17:39:44.323449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.963 ms 00:23:11.069 [2024-12-07 17:39:44.323458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.069 [2024-12-07 17:39:44.323497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.069 [2024-12-07 17:39:44.323507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:11.069 [2024-12-07 17:39:44.323516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:11.069 [2024-12-07 17:39:44.323524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.069 [2024-12-07 17:39:44.323576] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:11.069 [2024-12-07 17:39:44.323602] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:11.069 [2024-12-07 17:39:44.323641] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:11.069 [2024-12-07 17:39:44.323660] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:11.069 [2024-12-07 17:39:44.323767] upgrade/ftl_sb_v5.c: 
92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:11.069 [2024-12-07 17:39:44.323780] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:11.069 [2024-12-07 17:39:44.323791] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:11.069 [2024-12-07 17:39:44.323801] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:11.069 [2024-12-07 17:39:44.323811] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:11.069 [2024-12-07 17:39:44.323820] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:11.069 [2024-12-07 17:39:44.323828] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:11.069 [2024-12-07 17:39:44.323839] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:11.069 [2024-12-07 17:39:44.323848] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:11.069 [2024-12-07 17:39:44.323856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.069 [2024-12-07 17:39:44.323864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:11.069 [2024-12-07 17:39:44.323872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:23:11.069 [2024-12-07 17:39:44.323879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.069 [2024-12-07 17:39:44.323963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.069 [2024-12-07 17:39:44.323973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:11.069 [2024-12-07 17:39:44.324004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:11.069 [2024-12-07 17:39:44.324013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.069 [2024-12-07 17:39:44.324121] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:11.069 [2024-12-07 17:39:44.324133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:11.069 [2024-12-07 17:39:44.324142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:11.069 [2024-12-07 17:39:44.324151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:11.069 [2024-12-07 17:39:44.324159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:11.069 [2024-12-07 17:39:44.324166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:11.069 [2024-12-07 17:39:44.324174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:11.069 [2024-12-07 17:39:44.324181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:11.069 [2024-12-07 17:39:44.324188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:11.069 [2024-12-07 17:39:44.324195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:11.069 [2024-12-07 17:39:44.324203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:11.069 [2024-12-07 17:39:44.324210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:11.069 [2024-12-07 17:39:44.324217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:11.069 [2024-12-07 
17:39:44.324233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:11.069 [2024-12-07 17:39:44.324241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:11.069 [2024-12-07 17:39:44.324247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:11.069 [2024-12-07 17:39:44.324254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:11.069 [2024-12-07 17:39:44.324262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:11.069 [2024-12-07 17:39:44.324269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:11.069 [2024-12-07 17:39:44.324276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:11.069 [2024-12-07 17:39:44.324283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:11.069 [2024-12-07 17:39:44.324290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:11.069 [2024-12-07 17:39:44.324297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:11.069 [2024-12-07 17:39:44.324304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:11.069 [2024-12-07 17:39:44.324311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:11.069 [2024-12-07 17:39:44.324318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:11.069 [2024-12-07 17:39:44.324325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:11.069 [2024-12-07 17:39:44.324332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:11.069 [2024-12-07 17:39:44.324339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:11.069 [2024-12-07 17:39:44.324346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:11.069 [2024-12-07 17:39:44.324354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:11.069 [2024-12-07 17:39:44.324362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:11.069 [2024-12-07 17:39:44.324368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:11.069 [2024-12-07 17:39:44.324374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:11.069 [2024-12-07 17:39:44.324381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:11.070 [2024-12-07 17:39:44.324388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:11.070 [2024-12-07 17:39:44.324394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:11.070 [2024-12-07 17:39:44.324401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:11.070 [2024-12-07 17:39:44.324408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:11.070 [2024-12-07 17:39:44.324415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:11.070 [2024-12-07 17:39:44.324422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:11.070 [2024-12-07 17:39:44.324428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:11.070 [2024-12-07 17:39:44.324435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:11.070 [2024-12-07 17:39:44.324441] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:11.070 [2024-12-07 17:39:44.324450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region sb_mirror 00:23:11.070 [2024-12-07 17:39:44.324458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:11.070 [2024-12-07 17:39:44.324467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:11.070 [2024-12-07 17:39:44.324475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:11.070 [2024-12-07 17:39:44.324483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:11.070 [2024-12-07 17:39:44.324489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:11.070 [2024-12-07 17:39:44.324496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:11.070 [2024-12-07 17:39:44.324503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:11.070 [2024-12-07 17:39:44.324510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:11.070 [2024-12-07 17:39:44.324519] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:11.070 [2024-12-07 17:39:44.324528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:11.070 [2024-12-07 17:39:44.324539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:11.070 [2024-12-07 17:39:44.324548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:11.070 [2024-12-07 17:39:44.324554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:11.070 [2024-12-07 17:39:44.324562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:11.070 [2024-12-07 17:39:44.324569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:11.070 [2024-12-07 17:39:44.324575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:11.070 [2024-12-07 17:39:44.324582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:11.070 [2024-12-07 17:39:44.324590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:11.070 [2024-12-07 17:39:44.324598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:11.070 [2024-12-07 17:39:44.324604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:11.070 [2024-12-07 17:39:44.324612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:11.070 [2024-12-07 17:39:44.324618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:11.070 [2024-12-07 17:39:44.324625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:11.070 [2024-12-07 17:39:44.324633] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:11.070 [2024-12-07 17:39:44.324641] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:11.070 [2024-12-07 17:39:44.324649] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:11.070 [2024-12-07 17:39:44.324658] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:11.070 [2024-12-07 17:39:44.324668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:11.070 [2024-12-07 17:39:44.324676] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:11.070 [2024-12-07 17:39:44.324684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:11.070 [2024-12-07 17:39:44.324692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.070 [2024-12-07 17:39:44.324699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:11.070 [2024-12-07 17:39:44.324710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:23:11.070 [2024-12-07 17:39:44.324718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.070 [2024-12-07 17:39:44.357048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.070 [2024-12-07 17:39:44.357097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:11.070 [2024-12-07 17:39:44.357111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.280 ms 00:23:11.070 [2024-12-07 17:39:44.357123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.070 [2024-12-07 17:39:44.357218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.070 [2024-12-07 17:39:44.357228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:11.070 [2024-12-07 17:39:44.357236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:11.070 [2024-12-07 17:39:44.357245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.070 [2024-12-07 17:39:44.402566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.070 [2024-12-07 17:39:44.402620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:11.070 [2024-12-07 17:39:44.402634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.253 ms 00:23:11.070 [2024-12-07 17:39:44.402643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.070 [2024-12-07 17:39:44.402694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.070 [2024-12-07 17:39:44.402705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:11.070 [2024-12-07 17:39:44.402717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:11.070 [2024-12-07 17:39:44.402726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.070 [2024-12-07 17:39:44.403338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.070 [2024-12-07 
17:39:44.403363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:11.070 [2024-12-07 17:39:44.403375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:23:11.070 [2024-12-07 17:39:44.403383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.070 [2024-12-07 17:39:44.403545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.070 [2024-12-07 17:39:44.403562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:11.070 [2024-12-07 17:39:44.403578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:23:11.070 [2024-12-07 17:39:44.403586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.070 [2024-12-07 17:39:44.419383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.070 [2024-12-07 17:39:44.419587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:11.070 [2024-12-07 17:39:44.419608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.776 ms 00:23:11.070 [2024-12-07 17:39:44.419616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.070 [2024-12-07 17:39:44.434412] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:11.071 [2024-12-07 17:39:44.434600] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:11.071 [2024-12-07 17:39:44.434619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.071 [2024-12-07 17:39:44.434627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:11.071 [2024-12-07 17:39:44.434638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.881 ms 00:23:11.071 [2024-12-07 17:39:44.434645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.333 [2024-12-07 17:39:44.461049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.333 [2024-12-07 17:39:44.461258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:11.333 [2024-12-07 17:39:44.461283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.283 ms 00:23:11.333 [2024-12-07 17:39:44.461293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.333 [2024-12-07 17:39:44.474426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.333 [2024-12-07 17:39:44.474473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:11.333 [2024-12-07 17:39:44.474486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.994 ms 00:23:11.333 [2024-12-07 17:39:44.474495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.333 [2024-12-07 17:39:44.486940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.333 [2024-12-07 17:39:44.487001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:11.333 [2024-12-07 17:39:44.487015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.397 ms 00:23:11.333 [2024-12-07 17:39:44.487022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.333 [2024-12-07 17:39:44.487707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.333 [2024-12-07 17:39:44.487733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:23:11.333 [2024-12-07 17:39:44.487748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:23:11.333 [2024-12-07 17:39:44.487756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.333 [2024-12-07 17:39:44.555655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.333 [2024-12-07 17:39:44.555712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:11.333 [2024-12-07 17:39:44.555735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.879 ms 00:23:11.333 [2024-12-07 17:39:44.555744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.333 [2024-12-07 17:39:44.567009] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:11.333 [2024-12-07 17:39:44.570216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.333 [2024-12-07 17:39:44.570262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:11.333 [2024-12-07 17:39:44.570274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.415 ms 00:23:11.333 [2024-12-07 17:39:44.570283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.333 [2024-12-07 17:39:44.570369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.333 [2024-12-07 17:39:44.570379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:11.333 [2024-12-07 17:39:44.570392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:11.333 [2024-12-07 17:39:44.570401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.333 [2024-12-07 17:39:44.570474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.333 [2024-12-07 17:39:44.570485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:11.333 [2024-12-07 17:39:44.570494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:11.333 [2024-12-07 17:39:44.570503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.333 [2024-12-07 17:39:44.570524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.333 [2024-12-07 17:39:44.570533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:11.333 [2024-12-07 17:39:44.570542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:11.333 [2024-12-07 17:39:44.570550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.333 [2024-12-07 17:39:44.570591] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:11.333 [2024-12-07 17:39:44.570602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.333 [2024-12-07 17:39:44.570610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:11.333 [2024-12-07 17:39:44.570619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:11.333 [2024-12-07 17:39:44.570627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.333 [2024-12-07 17:39:44.596474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.333 [2024-12-07 17:39:44.596524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:11.333 [2024-12-07 17:39:44.596544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 25.826 ms 00:23:11.333 [2024-12-07 17:39:44.596552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.333 [2024-12-07 17:39:44.596639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.333 [2024-12-07 17:39:44.596649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:11.333 [2024-12-07 17:39:44.596659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:11.333 [2024-12-07 17:39:44.596668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.333 [2024-12-07 17:39:44.598247] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 304.314 ms, result 0
00:23:12.287  [2024-12-07T17:40:39.026Z] Copying: 1024/1024 [MB] (average 18 MBps)
[2024-12-07 17:40:38.933989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.644 [2024-12-07 17:40:38.934093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:05.644 [2024-12-07 17:40:38.934125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:05.644 [2024-12-07 17:40:38.934136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.644 [2024-12-07 17:40:38.935496] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:05.644 [2024-12-07 17:40:38.942495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.644 [2024-12-07 17:40:38.942549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:05.644 [2024-12-07 17:40:38.942563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.946 ms 00:24:05.644 [2024-12-07 17:40:38.942574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.644 [2024-12-07 17:40:38.955988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.644 [2024-12-07 17:40:38.956040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:05.644 [2024-12-07 17:40:38.956055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.058 ms 00:24:05.644 [2024-12-07 17:40:38.956075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.644 [2024-12-07 17:40:38.981503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.644 [2024-12-07 17:40:38.981555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:05.644 [2024-12-07 17:40:38.981569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.394 ms 00:24:05.644 [2024-12-07 17:40:38.981579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.644 [2024-12-07 17:40:38.987730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.644 [2024-12-07 17:40:38.987778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:05.644 [2024-12-07 17:40:38.987793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.111 ms 00:24:05.644 [2024-12-07 17:40:38.987811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.644 [2024-12-07 17:40:39.016445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.644 [2024-12-07 17:40:39.016498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:05.644 [2024-12-07 17:40:39.016514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.582 ms 00:24:05.644 [2024-12-07 17:40:39.016523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.905 [2024-12-07 17:40:39.033881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.905 [2024-12-07 17:40:39.033933] mngt/ftl_mngt.c:
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:05.905 [2024-12-07 17:40:39.033947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.302 ms 00:24:05.905 [2024-12-07 17:40:39.033956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.165 [2024-12-07 17:40:39.297056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.165 [2024-12-07 17:40:39.297084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:06.165 [2024-12-07 17:40:39.297094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 263.022 ms 00:24:06.165 [2024-12-07 17:40:39.297101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.165 [2024-12-07 17:40:39.315266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.165 [2024-12-07 17:40:39.315293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:06.165 [2024-12-07 17:40:39.315302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.153 ms 00:24:06.165 [2024-12-07 17:40:39.315309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.165 [2024-12-07 17:40:39.344951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.165 [2024-12-07 17:40:39.344998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:06.165 [2024-12-07 17:40:39.345010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.612 ms 00:24:06.165 [2024-12-07 17:40:39.345017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.165 [2024-12-07 17:40:39.362813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.165 [2024-12-07 17:40:39.362843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:06.165 [2024-12-07 17:40:39.362851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.761 ms 00:24:06.165 [2024-12-07 17:40:39.362858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.165 [2024-12-07 17:40:39.380118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.165 [2024-12-07 17:40:39.380145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:06.165 [2024-12-07 17:40:39.380153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.213 ms 00:24:06.165 [2024-12-07 17:40:39.380159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.165 [2024-12-07 17:40:39.380184] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:06.165 [2024-12-07 17:40:39.380196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 95232 / 261120 wr_cnt: 1 state: open 00:24:06.165 [2024-12-07 17:40:39.380204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:06.165 [2024-12-07 17:40:39.380210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:06.165 [2024-12-07 17:40:39.380216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:06.165 [2024-12-07 17:40:39.380222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:06.165 [2024-12-07 17:40:39.380228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:06.165 
[2024-12-07 17:40:39.380233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:06.165 [2024-12-07 17:40:39.380239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:06.165 [2024-12-07 17:40:39.380245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:06.165 [2024-12-07 17:40:39.380250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:06.165 [2024-12-07 17:40:39.380256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:06.165 [2024-12-07 17:40:39.380262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:06.165 [2024-12-07 17:40:39.380267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:06.165 [2024-12-07 17:40:39.380273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:06.165 [2024-12-07 17:40:39.380279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:06.165 [2024-12-07 17:40:39.380284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:06.165 [2024-12-07 17:40:39.380290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:06.165 [2024-12-07 17:40:39.380295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:06.165 [2024-12-07 17:40:39.380301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:06.165 [2024-12-07 17:40:39.380306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:06.165 [2024-12-07 17:40:39.380312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:06.165 [2024-12-07 17:40:39.380318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:06.165 [2024-12-07 17:40:39.380323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:06.165 [2024-12-07 17:40:39.380329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:06.165 [2024-12-07 17:40:39.380334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: 
free 00:24:06.166 [2024-12-07 17:40:39.380375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 
261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:06.166 [2024-12-07 17:40:39.380884] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:06.166 [2024-12-07 17:40:39.380891] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 246c4a5e-6db5-4225-b39d-2b44d9866ffc 00:24:06.166 [2024-12-07 17:40:39.380897] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 95232 00:24:06.166 [2024-12-07 17:40:39.380903] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 96192 00:24:06.166 [2024-12-07 17:40:39.380909] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 95232 00:24:06.166 [2024-12-07 17:40:39.380915] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0101 00:24:06.166 [2024-12-07 17:40:39.380929] 
ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:06.166 [2024-12-07 17:40:39.380935] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:06.166 [2024-12-07 17:40:39.380940] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:06.166 [2024-12-07 17:40:39.380946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:06.166 [2024-12-07 17:40:39.380951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:06.166 [2024-12-07 17:40:39.380956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.166 [2024-12-07 17:40:39.380962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:06.166 [2024-12-07 17:40:39.380968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 00:24:06.166 [2024-12-07 17:40:39.380973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.166 [2024-12-07 17:40:39.390400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.167 [2024-12-07 17:40:39.390425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:06.167 [2024-12-07 17:40:39.390436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.405 ms 00:24:06.167 [2024-12-07 17:40:39.390442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.167 [2024-12-07 17:40:39.390708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.167 [2024-12-07 17:40:39.390721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:06.167 [2024-12-07 17:40:39.390728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:24:06.167 [2024-12-07 17:40:39.390733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.167 [2024-12-07 17:40:39.416396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.167 [2024-12-07 17:40:39.416423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:06.167 [2024-12-07 17:40:39.416431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.167 [2024-12-07 17:40:39.416436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.167 [2024-12-07 17:40:39.416483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.167 [2024-12-07 17:40:39.416489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:06.167 [2024-12-07 17:40:39.416496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.167 [2024-12-07 17:40:39.416501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.167 [2024-12-07 17:40:39.416543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.167 [2024-12-07 17:40:39.416553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:06.167 [2024-12-07 17:40:39.416559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.167 [2024-12-07 17:40:39.416565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.167 [2024-12-07 17:40:39.416577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.167 [2024-12-07 17:40:39.416584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:06.167 [2024-12-07 17:40:39.416589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:24:06.167 [2024-12-07 17:40:39.416595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.167 [2024-12-07 17:40:39.475202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.167 [2024-12-07 17:40:39.475240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:06.167 [2024-12-07 17:40:39.475249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.167 [2024-12-07 17:40:39.475254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.167 [2024-12-07 17:40:39.523652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.167 [2024-12-07 17:40:39.523687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:06.167 [2024-12-07 17:40:39.523696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.167 [2024-12-07 17:40:39.523702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.167 [2024-12-07 17:40:39.523756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.167 [2024-12-07 17:40:39.523764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:06.167 [2024-12-07 17:40:39.523770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.167 [2024-12-07 17:40:39.523779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.167 [2024-12-07 17:40:39.523805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.167 [2024-12-07 17:40:39.523811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:06.167 [2024-12-07 17:40:39.523817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.167 [2024-12-07 17:40:39.523824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.167 [2024-12-07 17:40:39.523890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.167 [2024-12-07 17:40:39.523897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:06.167 [2024-12-07 17:40:39.523904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.167 [2024-12-07 17:40:39.523912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.167 [2024-12-07 17:40:39.523934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.167 [2024-12-07 17:40:39.523940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:06.167 [2024-12-07 17:40:39.523947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.167 [2024-12-07 17:40:39.523953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.167 [2024-12-07 17:40:39.523996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.167 [2024-12-07 17:40:39.524004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:06.167 [2024-12-07 17:40:39.524010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.167 [2024-12-07 17:40:39.524015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.167 [2024-12-07 17:40:39.524053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.167 [2024-12-07 17:40:39.524060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:06.167 [2024-12-07 17:40:39.524066] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.167 [2024-12-07 17:40:39.524071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.167 [2024-12-07 17:40:39.524164] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 592.747 ms, result 0 00:24:07.548 00:24:07.548 00:24:07.548 17:40:40 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:07.548 [2024-12-07 17:40:40.688968] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:24:07.548 [2024-12-07 17:40:40.689117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79336 ] 00:24:07.548 [2024-12-07 17:40:40.851430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.810 [2024-12-07 17:40:40.973097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.073 [2024-12-07 17:40:41.274209] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:08.073 [2024-12-07 17:40:41.274297] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:08.073 [2024-12-07 17:40:41.438132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.073 [2024-12-07 17:40:41.438198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:08.073 [2024-12-07 17:40:41.438214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:08.073 [2024-12-07 17:40:41.438223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.073 [2024-12-07 17:40:41.438282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.073 [2024-12-07 17:40:41.438295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:08.073 [2024-12-07 17:40:41.438305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:08.073 [2024-12-07 17:40:41.438313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.073 [2024-12-07 17:40:41.438335] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:08.073 [2024-12-07 17:40:41.439122] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:08.073 [2024-12-07 17:40:41.439157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.073 [2024-12-07 17:40:41.439166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:08.073 [2024-12-07 17:40:41.439175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:24:08.073 [2024-12-07 17:40:41.439183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.073 [2024-12-07 17:40:41.440927] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:08.337 [2024-12-07 17:40:41.455600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.337 [2024-12-07 17:40:41.455654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:08.337 
[2024-12-07 17:40:41.455669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.676 ms 00:24:08.337 [2024-12-07 17:40:41.455677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.337 [2024-12-07 17:40:41.455769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.337 [2024-12-07 17:40:41.455779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:08.337 [2024-12-07 17:40:41.455789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:08.337 [2024-12-07 17:40:41.455798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.337 [2024-12-07 17:40:41.464355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.337 [2024-12-07 17:40:41.464402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:08.337 [2024-12-07 17:40:41.464412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.474 ms 00:24:08.337 [2024-12-07 17:40:41.464428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.337 [2024-12-07 17:40:41.464511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.337 [2024-12-07 17:40:41.464520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:08.337 [2024-12-07 17:40:41.464529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:24:08.337 [2024-12-07 17:40:41.464538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.337 [2024-12-07 17:40:41.464584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.337 [2024-12-07 17:40:41.464596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:08.337 [2024-12-07 17:40:41.464605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:08.337 [2024-12-07 17:40:41.464614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.337 [2024-12-07 17:40:41.464641] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:08.337 [2024-12-07 17:40:41.468852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.337 [2024-12-07 17:40:41.468896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:08.337 [2024-12-07 17:40:41.468910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.217 ms 00:24:08.337 [2024-12-07 17:40:41.468919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.337 [2024-12-07 17:40:41.468958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.337 [2024-12-07 17:40:41.468967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:08.337 [2024-12-07 17:40:41.468976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:08.337 [2024-12-07 17:40:41.468996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.337 [2024-12-07 17:40:41.469053] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:08.337 [2024-12-07 17:40:41.469080] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:08.337 [2024-12-07 17:40:41.469117] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:08.337 [2024-12-07 17:40:41.469137] 
upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:08.337 [2024-12-07 17:40:41.469249] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:08.337 [2024-12-07 17:40:41.469261] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:08.337 [2024-12-07 17:40:41.469272] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:08.337 [2024-12-07 17:40:41.469282] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:08.337 [2024-12-07 17:40:41.469292] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:08.337 [2024-12-07 17:40:41.469301] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:08.337 [2024-12-07 17:40:41.469308] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:08.337 [2024-12-07 17:40:41.469320] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:08.337 [2024-12-07 17:40:41.469328] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:08.337 [2024-12-07 17:40:41.469337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.337 [2024-12-07 17:40:41.469344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:08.337 [2024-12-07 17:40:41.469352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:24:08.337 [2024-12-07 17:40:41.469360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.337 [2024-12-07 17:40:41.469447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.337 [2024-12-07 17:40:41.469457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:08.337 [2024-12-07 17:40:41.469465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:08.337 [2024-12-07 17:40:41.469472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.337 [2024-12-07 17:40:41.469595] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:08.337 [2024-12-07 17:40:41.469639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:08.337 [2024-12-07 17:40:41.469648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:08.337 [2024-12-07 17:40:41.469658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.337 [2024-12-07 17:40:41.469666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:08.337 [2024-12-07 17:40:41.469673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:08.337 [2024-12-07 17:40:41.469680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:08.337 [2024-12-07 17:40:41.469688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:08.337 [2024-12-07 17:40:41.469695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:08.337 [2024-12-07 17:40:41.469702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:08.337 [2024-12-07 17:40:41.469709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:08.337 [2024-12-07 17:40:41.469719] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 80.62 MiB 00:24:08.337 [2024-12-07 17:40:41.469727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:08.337 [2024-12-07 17:40:41.469742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:08.337 [2024-12-07 17:40:41.469749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:08.337 [2024-12-07 17:40:41.469756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.337 [2024-12-07 17:40:41.469763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:08.337 [2024-12-07 17:40:41.469769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:08.337 [2024-12-07 17:40:41.469776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.337 [2024-12-07 17:40:41.469784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:08.337 [2024-12-07 17:40:41.469791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:08.337 [2024-12-07 17:40:41.469797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.337 [2024-12-07 17:40:41.469804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:08.337 [2024-12-07 17:40:41.469810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:08.337 [2024-12-07 17:40:41.469817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.337 [2024-12-07 17:40:41.469824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:08.337 [2024-12-07 17:40:41.469831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:08.337 [2024-12-07 17:40:41.469837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.337 [2024-12-07 17:40:41.469844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:08.337 [2024-12-07 17:40:41.469852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:08.337 [2024-12-07 17:40:41.469859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:08.337 [2024-12-07 17:40:41.469866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:08.337 [2024-12-07 17:40:41.469874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:08.337 [2024-12-07 17:40:41.469881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:08.337 [2024-12-07 17:40:41.469888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:08.337 [2024-12-07 17:40:41.469895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:08.337 [2024-12-07 17:40:41.469902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:08.337 [2024-12-07 17:40:41.469909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:08.337 [2024-12-07 17:40:41.469916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:08.337 [2024-12-07 17:40:41.469922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.338 [2024-12-07 17:40:41.469930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:08.338 [2024-12-07 17:40:41.469937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:08.338 [2024-12-07 17:40:41.469944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.338 [2024-12-07 17:40:41.469953] 
ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:08.338 [2024-12-07 17:40:41.469963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:08.338 [2024-12-07 17:40:41.469970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:08.338 [2024-12-07 17:40:41.469979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:08.338 [2024-12-07 17:40:41.470032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:08.338 [2024-12-07 17:40:41.470041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:08.338 [2024-12-07 17:40:41.470049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:08.338 [2024-12-07 17:40:41.470057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:08.338 [2024-12-07 17:40:41.470064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:08.338 [2024-12-07 17:40:41.470071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:08.338 [2024-12-07 17:40:41.470080] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:08.338 [2024-12-07 17:40:41.470091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:08.338 [2024-12-07 17:40:41.470104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:08.338 [2024-12-07 17:40:41.470112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:08.338 [2024-12-07 17:40:41.470120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:08.338 [2024-12-07 17:40:41.470127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:08.338 [2024-12-07 17:40:41.470135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:08.338 [2024-12-07 17:40:41.470142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:08.338 [2024-12-07 17:40:41.470150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:08.338 [2024-12-07 17:40:41.470157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:08.338 [2024-12-07 17:40:41.470165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:08.338 [2024-12-07 17:40:41.470172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:08.338 [2024-12-07 17:40:41.470180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:08.338 [2024-12-07 17:40:41.470187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:08.338 [2024-12-07 17:40:41.470194] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:08.338 [2024-12-07 17:40:41.470201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:08.338 [2024-12-07 17:40:41.470208] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:08.338 [2024-12-07 17:40:41.470217] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:08.338 [2024-12-07 17:40:41.470225] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:08.338 [2024-12-07 17:40:41.470233] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:08.338 [2024-12-07 17:40:41.470240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:08.338 [2024-12-07 17:40:41.470246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:08.338 [2024-12-07 17:40:41.470255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.338 [2024-12-07 17:40:41.470262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:08.338 [2024-12-07 17:40:41.470271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:24:08.338 [2024-12-07 17:40:41.470279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.338 [2024-12-07 17:40:41.503250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.338 [2024-12-07 17:40:41.503305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:08.338 [2024-12-07 17:40:41.503318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.921 ms 00:24:08.338 [2024-12-07 17:40:41.503331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.338 [2024-12-07 17:40:41.503428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.338 [2024-12-07 17:40:41.503437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:08.338 [2024-12-07 17:40:41.503447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:08.338 [2024-12-07 17:40:41.503456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.338 [2024-12-07 17:40:41.548396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.338 [2024-12-07 17:40:41.548454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:08.338 [2024-12-07 17:40:41.548469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.874 ms 00:24:08.338 [2024-12-07 17:40:41.548478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.338 [2024-12-07 17:40:41.548531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.338 [2024-12-07 17:40:41.548542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:08.338 [2024-12-07 17:40:41.548555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:08.338 [2024-12-07 17:40:41.548563] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.338 [2024-12-07 17:40:41.549238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.338 [2024-12-07 17:40:41.549277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:08.338 [2024-12-07 17:40:41.549290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:24:08.338 [2024-12-07 17:40:41.549298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.338 [2024-12-07 17:40:41.549464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.338 [2024-12-07 17:40:41.549482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:08.338 [2024-12-07 17:40:41.549524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:24:08.338 [2024-12-07 17:40:41.549532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.338 [2024-12-07 17:40:41.565456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.338 [2024-12-07 17:40:41.565518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:08.338 [2024-12-07 17:40:41.565530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.903 ms 00:24:08.338 [2024-12-07 17:40:41.565538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.338 [2024-12-07 17:40:41.580554] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:08.338 [2024-12-07 17:40:41.580606] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:08.338 [2024-12-07 17:40:41.580620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.338 [2024-12-07 17:40:41.580629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:08.338 [2024-12-07 17:40:41.580639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.967 ms 00:24:08.338 [2024-12-07 17:40:41.580647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.338 [2024-12-07 17:40:41.607170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.338 [2024-12-07 17:40:41.607222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:08.338 [2024-12-07 17:40:41.607235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.461 ms 00:24:08.338 [2024-12-07 17:40:41.607243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.338 [2024-12-07 17:40:41.620891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.338 [2024-12-07 17:40:41.620942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:08.338 [2024-12-07 17:40:41.620953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.588 ms 00:24:08.338 [2024-12-07 17:40:41.620961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.338 [2024-12-07 17:40:41.633998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.338 [2024-12-07 17:40:41.634045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:08.338 [2024-12-07 17:40:41.634056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.971 ms 00:24:08.338 [2024-12-07 17:40:41.634065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.338 
[2024-12-07 17:40:41.634716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.338 [2024-12-07 17:40:41.634750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:08.338 [2024-12-07 17:40:41.634764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:24:08.338 [2024-12-07 17:40:41.634772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.338 [2024-12-07 17:40:41.702055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.338 [2024-12-07 17:40:41.702114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:08.338 [2024-12-07 17:40:41.702137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.263 ms 00:24:08.338 [2024-12-07 17:40:41.702146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.338 [2024-12-07 17:40:41.713266] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:08.600 [2024-12-07 17:40:41.716460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.600 [2024-12-07 17:40:41.716507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:08.600 [2024-12-07 17:40:41.716520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.252 ms 00:24:08.600 [2024-12-07 17:40:41.716528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.600 [2024-12-07 17:40:41.716621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.600 [2024-12-07 17:40:41.716632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:08.600 [2024-12-07 17:40:41.716645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:08.600 [2024-12-07 17:40:41.716653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.600 [2024-12-07 17:40:41.718423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.600 [2024-12-07 17:40:41.718474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:08.600 [2024-12-07 17:40:41.718486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.730 ms 00:24:08.600 [2024-12-07 17:40:41.718494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.600 [2024-12-07 17:40:41.718525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.600 [2024-12-07 17:40:41.718534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:08.600 [2024-12-07 17:40:41.718544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:08.600 [2024-12-07 17:40:41.718553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.600 [2024-12-07 17:40:41.718602] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:08.600 [2024-12-07 17:40:41.718614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.600 [2024-12-07 17:40:41.718625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:08.600 [2024-12-07 17:40:41.718635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:08.600 [2024-12-07 17:40:41.718643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.600 [2024-12-07 17:40:41.744692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.600 [2024-12-07 
17:40:41.744745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:08.600 [2024-12-07 17:40:41.744765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.029 ms 00:24:08.600 [2024-12-07 17:40:41.744774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.600 [2024-12-07 17:40:41.744868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.600 [2024-12-07 17:40:41.744880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:08.600 [2024-12-07 17:40:41.744889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:08.600 [2024-12-07 17:40:41.744897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.600 [2024-12-07 17:40:41.746266] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 307.596 ms, result 0 00:24:09.989  [2024-12-07T17:40:43.942Z] Copying: 8384/1048576 [kB] (8384 kBps) [2024-12-07T17:40:45.330Z] Copying: 18/1024 [MB] (10 MBps) [2024-12-07T17:40:46.270Z] Copying: 29/1024 [MB] (10 MBps) [2024-12-07T17:40:47.210Z] Copying: 39/1024 [MB] (10 MBps) [2024-12-07T17:40:48.152Z] Copying: 52/1024 [MB] (12 MBps) [2024-12-07T17:40:49.094Z] Copying: 62/1024 [MB] (10 MBps) [2024-12-07T17:40:50.035Z] Copying: 73/1024 [MB] (10 MBps) [2024-12-07T17:40:50.985Z] Copying: 87/1024 [MB] (13 MBps) [2024-12-07T17:40:52.369Z] Copying: 99/1024 [MB] (12 MBps) [2024-12-07T17:40:53.311Z] Copying: 121/1024 [MB] (21 MBps) [2024-12-07T17:40:54.253Z] Copying: 137/1024 [MB] (15 MBps) [2024-12-07T17:40:55.196Z] Copying: 149/1024 [MB] (12 MBps) [2024-12-07T17:40:56.141Z] Copying: 160/1024 [MB] (11 MBps) [2024-12-07T17:40:57.084Z] Copying: 171/1024 [MB] (10 MBps) [2024-12-07T17:40:58.024Z] Copying: 184/1024 [MB] (13 MBps) [2024-12-07T17:40:58.967Z] Copying: 209/1024 [MB] (25 MBps) [2024-12-07T17:41:00.350Z] Copying: 230/1024 [MB] (21 MBps) [2024-12-07T17:41:01.292Z] Copying: 246/1024 [MB] (15 MBps) [2024-12-07T17:41:02.310Z] Copying: 264/1024 [MB] (18 MBps) [2024-12-07T17:41:03.302Z] Copying: 284/1024 [MB] (20 MBps) [2024-12-07T17:41:04.244Z] Copying: 304/1024 [MB] (20 MBps) [2024-12-07T17:41:05.187Z] Copying: 328/1024 [MB] (23 MBps) [2024-12-07T17:41:06.127Z] Copying: 348/1024 [MB] (19 MBps) [2024-12-07T17:41:07.065Z] Copying: 368/1024 [MB] (20 MBps) [2024-12-07T17:41:08.007Z] Copying: 387/1024 [MB] (19 MBps) [2024-12-07T17:41:08.951Z] Copying: 407/1024 [MB] (20 MBps) [2024-12-07T17:41:10.339Z] Copying: 427/1024 [MB] (19 MBps) [2024-12-07T17:41:11.282Z] Copying: 442/1024 [MB] (15 MBps) [2024-12-07T17:41:12.237Z] Copying: 458/1024 [MB] (16 MBps) [2024-12-07T17:41:13.182Z] Copying: 469/1024 [MB] (10 MBps) [2024-12-07T17:41:14.127Z] Copying: 488/1024 [MB] (18 MBps) [2024-12-07T17:41:15.072Z] Copying: 498/1024 [MB] (10 MBps) [2024-12-07T17:41:16.022Z] Copying: 514/1024 [MB] (16 MBps) [2024-12-07T17:41:16.964Z] Copying: 530/1024 [MB] (15 MBps) [2024-12-07T17:41:18.347Z] Copying: 544/1024 [MB] (14 MBps) [2024-12-07T17:41:19.291Z] Copying: 564/1024 [MB] (19 MBps) [2024-12-07T17:41:20.235Z] Copying: 575/1024 [MB] (10 MBps) [2024-12-07T17:41:21.180Z] Copying: 585/1024 [MB] (10 MBps) [2024-12-07T17:41:22.126Z] Copying: 600/1024 [MB] (14 MBps) [2024-12-07T17:41:23.073Z] Copying: 613/1024 [MB] (13 MBps) [2024-12-07T17:41:24.018Z] Copying: 625/1024 [MB] (12 MBps) [2024-12-07T17:41:24.963Z] Copying: 636/1024 [MB] (10 MBps) [2024-12-07T17:41:26.347Z] Copying: 647/1024 [MB] (10 MBps) 
[2024-12-07T17:41:27.287Z] Copying: 657/1024 [MB] (10 MBps) [2024-12-07T17:41:28.232Z] Copying: 673/1024 [MB] (15 MBps) [2024-12-07T17:41:29.178Z] Copying: 691/1024 [MB] (18 MBps) [2024-12-07T17:41:30.122Z] Copying: 709/1024 [MB] (17 MBps) [2024-12-07T17:41:31.065Z] Copying: 730/1024 [MB] (21 MBps) [2024-12-07T17:41:32.010Z] Copying: 751/1024 [MB] (21 MBps) [2024-12-07T17:41:32.957Z] Copying: 765/1024 [MB] (13 MBps) [2024-12-07T17:41:34.410Z] Copying: 783/1024 [MB] (18 MBps) [2024-12-07T17:41:34.985Z] Copying: 803/1024 [MB] (19 MBps) [2024-12-07T17:41:36.370Z] Copying: 824/1024 [MB] (20 MBps) [2024-12-07T17:41:36.941Z] Copying: 838/1024 [MB] (13 MBps) [2024-12-07T17:41:38.327Z] Copying: 849/1024 [MB] (11 MBps) [2024-12-07T17:41:39.269Z] Copying: 866/1024 [MB] (17 MBps) [2024-12-07T17:41:40.211Z] Copying: 877/1024 [MB] (10 MBps) [2024-12-07T17:41:41.155Z] Copying: 887/1024 [MB] (10 MBps) [2024-12-07T17:41:42.099Z] Copying: 900/1024 [MB] (13 MBps) [2024-12-07T17:41:43.043Z] Copying: 916/1024 [MB] (15 MBps) [2024-12-07T17:41:43.986Z] Copying: 935/1024 [MB] (18 MBps) [2024-12-07T17:41:45.373Z] Copying: 955/1024 [MB] (19 MBps) [2024-12-07T17:41:45.941Z] Copying: 965/1024 [MB] (10 MBps) [2024-12-07T17:41:47.325Z] Copying: 978/1024 [MB] (12 MBps) [2024-12-07T17:41:48.271Z] Copying: 992/1024 [MB] (14 MBps) [2024-12-07T17:41:49.219Z] Copying: 1007/1024 [MB] (14 MBps) [2024-12-07T17:41:49.481Z] Copying: 1018/1024 [MB] (11 MBps) [2024-12-07T17:41:49.481Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-07 17:41:49.263074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.099 [2024-12-07 17:41:49.263119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:16.099 [2024-12-07 17:41:49.263138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:16.099 [2024-12-07 17:41:49.263147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.099 [2024-12-07 17:41:49.263167] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:16.099 [2024-12-07 17:41:49.265782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.099 [2024-12-07 17:41:49.265811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:16.099 [2024-12-07 17:41:49.265821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.602 ms 00:25:16.099 [2024-12-07 17:41:49.265830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.099 [2024-12-07 17:41:49.266043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.099 [2024-12-07 17:41:49.266054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:16.099 [2024-12-07 17:41:49.266063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:25:16.099 [2024-12-07 17:41:49.266074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.099 [2024-12-07 17:41:49.271802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.099 [2024-12-07 17:41:49.271836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:16.100 [2024-12-07 17:41:49.271845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.714 ms 00:25:16.100 [2024-12-07 17:41:49.271852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.100 [2024-12-07 17:41:49.278039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:16.100 [2024-12-07 17:41:49.278067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:16.100 [2024-12-07 17:41:49.278076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.157 ms 00:25:16.100 [2024-12-07 17:41:49.278088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.100 [2024-12-07 17:41:49.302452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.100 [2024-12-07 17:41:49.302484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:16.100 [2024-12-07 17:41:49.302495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.315 ms 00:25:16.100 [2024-12-07 17:41:49.302503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.100 [2024-12-07 17:41:49.317268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.100 [2024-12-07 17:41:49.317302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:16.100 [2024-12-07 17:41:49.317313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.732 ms 00:25:16.100 [2024-12-07 17:41:49.317320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.364 [2024-12-07 17:41:49.555798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.364 [2024-12-07 17:41:49.555855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:16.364 [2024-12-07 17:41:49.555869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 238.437 ms 00:25:16.364 [2024-12-07 17:41:49.555878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.364 [2024-12-07 17:41:49.582469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.364 [2024-12-07 17:41:49.582517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:16.364 [2024-12-07 17:41:49.582531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.573 ms 00:25:16.364 [2024-12-07 17:41:49.582539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.364 [2024-12-07 17:41:49.607898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.364 [2024-12-07 17:41:49.607958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:16.364 [2024-12-07 17:41:49.607970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.312 ms 00:25:16.364 [2024-12-07 17:41:49.607978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.364 [2024-12-07 17:41:49.632926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.364 [2024-12-07 17:41:49.632971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:16.364 [2024-12-07 17:41:49.632993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.892 ms 00:25:16.364 [2024-12-07 17:41:49.633001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.364 [2024-12-07 17:41:49.657760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.364 [2024-12-07 17:41:49.657802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:16.365 [2024-12-07 17:41:49.657814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.687 ms 00:25:16.365 [2024-12-07 17:41:49.657822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.365 [2024-12-07 
17:41:49.657866] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:16.365 [2024-12-07 17:41:49.657883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131840 / 261120 wr_cnt: 1 state: open 00:25:16.365 [2024-12-07 17:41:49.657894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.657903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.657913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.657923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.657932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.657940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.657948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.657958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.657968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.657994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 
17:41:49.658109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:25:16.365 [2024-12-07 17:41:49.658308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:16.365 [2024-12-07 17:41:49.658585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:16.366 [2024-12-07 17:41:49.658592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:16.366 [2024-12-07 17:41:49.658599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:16.366 [2024-12-07 17:41:49.658606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:16.366 [2024-12-07 17:41:49.658614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:16.366 [2024-12-07 17:41:49.658623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:16.366 [2024-12-07 17:41:49.658631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:16.366 [2024-12-07 17:41:49.658639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:16.366 [2024-12-07 17:41:49.658649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:16.366 [2024-12-07 17:41:49.658657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:16.366 [2024-12-07 17:41:49.658665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:16.366 [2024-12-07 17:41:49.658683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:16.366 [2024-12-07 17:41:49.658701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:16.366 [2024-12-07 17:41:49.658709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:16.366 [2024-12-07 17:41:49.658717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:16.366 [2024-12-07 17:41:49.658733] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:16.366 [2024-12-07 17:41:49.658741] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 246c4a5e-6db5-4225-b39d-2b44d9866ffc 00:25:16.366 [2024-12-07 17:41:49.658751] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131840 00:25:16.366 [2024-12-07 17:41:49.658758] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 37568 00:25:16.366 [2024-12-07 17:41:49.658768] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 36608 00:25:16.366 [2024-12-07 17:41:49.658777] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0262 00:25:16.366 [2024-12-07 17:41:49.658788] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:16.366 [2024-12-07 17:41:49.658803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:16.366 [2024-12-07 17:41:49.658811] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:16.366 [2024-12-07 17:41:49.658818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:16.366 [2024-12-07 17:41:49.658826] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:16.366 [2024-12-07 17:41:49.658834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.366 [2024-12-07 17:41:49.658842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:16.366 [2024-12-07 17:41:49.658851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:25:16.366 [2024-12-07 17:41:49.658858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.366 [2024-12-07 17:41:49.672806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.366 [2024-12-07 17:41:49.672847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:16.366 [2024-12-07 17:41:49.672866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.928 ms 00:25:16.366 [2024-12-07 17:41:49.672874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.366 [2024-12-07 17:41:49.673290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.366 [2024-12-07 17:41:49.673318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:16.366 [2024-12-07 17:41:49.673328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:25:16.366 [2024-12-07 17:41:49.673337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.366 [2024-12-07 17:41:49.709587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.366 [2024-12-07 17:41:49.709638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:16.366 [2024-12-07 17:41:49.709651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.366 [2024-12-07 17:41:49.709661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.366 [2024-12-07 17:41:49.709733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.366 [2024-12-07 17:41:49.709744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:16.366 [2024-12-07 17:41:49.709754] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.366 [2024-12-07 17:41:49.709764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.366 [2024-12-07 17:41:49.709831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.366 [2024-12-07 17:41:49.709843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:16.366 [2024-12-07 17:41:49.709859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.366 [2024-12-07 17:41:49.709868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.366 [2024-12-07 17:41:49.709885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.366 [2024-12-07 17:41:49.709895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:16.366 [2024-12-07 17:41:49.709904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.366 [2024-12-07 17:41:49.709912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.627 [2024-12-07 17:41:49.794274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.627 [2024-12-07 17:41:49.794340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:16.627 [2024-12-07 17:41:49.794354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.627 [2024-12-07 17:41:49.794362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.627 [2024-12-07 17:41:49.862369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.627 [2024-12-07 17:41:49.862427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:16.627 [2024-12-07 17:41:49.862439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.627 [2024-12-07 17:41:49.862448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.627 [2024-12-07 17:41:49.862529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.627 [2024-12-07 17:41:49.862539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:16.627 [2024-12-07 17:41:49.862548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.627 [2024-12-07 17:41:49.862564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.627 [2024-12-07 17:41:49.862601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.627 [2024-12-07 17:41:49.862611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:16.627 [2024-12-07 17:41:49.862620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.627 [2024-12-07 17:41:49.862629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.627 [2024-12-07 17:41:49.862729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.627 [2024-12-07 17:41:49.862743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:16.627 [2024-12-07 17:41:49.862752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.627 [2024-12-07 17:41:49.862760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.627 [2024-12-07 17:41:49.862795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.627 [2024-12-07 17:41:49.862805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:25:16.627 [2024-12-07 17:41:49.862814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.627 [2024-12-07 17:41:49.862822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.627 [2024-12-07 17:41:49.862863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.627 [2024-12-07 17:41:49.862875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:16.627 [2024-12-07 17:41:49.862883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.627 [2024-12-07 17:41:49.862892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.627 [2024-12-07 17:41:49.862938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.627 [2024-12-07 17:41:49.862950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:16.627 [2024-12-07 17:41:49.862960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.627 [2024-12-07 17:41:49.862968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.627 [2024-12-07 17:41:49.863136] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 600.016 ms, result 0 00:25:17.571 00:25:17.571 00:25:17.571 17:41:50 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:19.484 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:19.484 17:41:52 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:19.484 17:41:52 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:19.484 17:41:52 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:19.484 17:41:52 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:19.484 17:41:52 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:19.484 17:41:52 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77291 00:25:19.484 17:41:52 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77291 ']' 00:25:19.484 17:41:52 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77291 00:25:19.485 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77291) - No such process 00:25:19.485 Process with pid 77291 is not found 00:25:19.485 17:41:52 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77291 is not found' 00:25:19.485 Remove shared memory files 00:25:19.485 17:41:52 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:19.485 17:41:52 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:19.485 17:41:52 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:19.485 17:41:52 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:25:19.485 17:41:52 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:19.485 17:41:52 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:19.485 17:41:52 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:19.485 ************************************ 00:25:19.485 END TEST ftl_restore 00:25:19.485 ************************************ 00:25:19.485 00:25:19.485 real 4m29.621s 00:25:19.485 user 4m17.955s 00:25:19.485 sys 0m11.660s 00:25:19.485 17:41:52 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:25:19.485 17:41:52 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:19.744 17:41:52 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:19.744 17:41:52 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:19.744 17:41:52 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:19.744 17:41:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:19.744 ************************************ 00:25:19.744 START TEST ftl_dirty_shutdown 00:25:19.744 ************************************ 00:25:19.744 17:41:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:19.744 * Looking for test storage... 00:25:19.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:19.744 17:41:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:19.744 17:41:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:19.744 17:41:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:19.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.744 --rc genhtml_branch_coverage=1 00:25:19.744 --rc genhtml_function_coverage=1 00:25:19.744 --rc genhtml_legend=1 00:25:19.744 --rc geninfo_all_blocks=1 00:25:19.744 --rc geninfo_unexecuted_blocks=1 00:25:19.744 00:25:19.744 ' 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:19.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.744 --rc genhtml_branch_coverage=1 00:25:19.744 --rc genhtml_function_coverage=1 00:25:19.744 --rc genhtml_legend=1 00:25:19.744 --rc geninfo_all_blocks=1 00:25:19.744 --rc geninfo_unexecuted_blocks=1 00:25:19.744 00:25:19.744 ' 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:19.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.744 --rc genhtml_branch_coverage=1 00:25:19.744 --rc genhtml_function_coverage=1 00:25:19.744 --rc genhtml_legend=1 00:25:19.744 --rc geninfo_all_blocks=1 00:25:19.744 --rc geninfo_unexecuted_blocks=1 00:25:19.744 00:25:19.744 ' 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:19.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.744 --rc genhtml_branch_coverage=1 00:25:19.744 --rc genhtml_function_coverage=1 00:25:19.744 --rc genhtml_legend=1 00:25:19.744 --rc geninfo_all_blocks=1 00:25:19.744 --rc geninfo_unexecuted_blocks=1 00:25:19.744 00:25:19.744 ' 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:19.744 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:19.745 17:41:53 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=80128 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 80128 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80128 ']' 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:19.745 17:41:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:20.005 [2024-12-07 17:41:53.147162] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
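That waitforlisten trace is the stock autotest_common.sh pattern: fork spdk_tgt, then poll its UNIX-domain RPC socket until a trivial call succeeds. A minimal sketch of the same loop (the rpc_get_methods probe and the retry cadence are assumptions, not the exact library code; max_retries=100 matches the trace):

start_and_wait() {
    local sock=/var/tmp/spdk.sock i
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    for ((i = 1; i <= 100; i++)); do
        # the target is ready once any RPC answers over the socket
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    kill "$svcpid"
    return 1
}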
00:25:20.005 [2024-12-07 17:41:53.147258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80128 ] 00:25:20.005 [2024-12-07 17:41:53.298604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.266 [2024-12-07 17:41:53.419558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.836 17:41:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:20.836 17:41:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:20.836 17:41:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:20.836 17:41:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:20.836 17:41:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:20.836 17:41:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:20.836 17:41:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:20.836 17:41:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:21.097 17:41:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:21.097 17:41:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:21.097 17:41:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:21.097 17:41:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:21.097 17:41:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:21.097 17:41:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:21.097 17:41:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:21.097 17:41:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:21.357 17:41:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:21.357 { 00:25:21.357 "name": "nvme0n1", 00:25:21.357 "aliases": [ 00:25:21.357 "13563ab5-4453-4d21-9f6a-e98f90bd6e8a" 00:25:21.357 ], 00:25:21.357 "product_name": "NVMe disk", 00:25:21.357 "block_size": 4096, 00:25:21.357 "num_blocks": 1310720, 00:25:21.357 "uuid": "13563ab5-4453-4d21-9f6a-e98f90bd6e8a", 00:25:21.357 "numa_id": -1, 00:25:21.357 "assigned_rate_limits": { 00:25:21.357 "rw_ios_per_sec": 0, 00:25:21.357 "rw_mbytes_per_sec": 0, 00:25:21.357 "r_mbytes_per_sec": 0, 00:25:21.357 "w_mbytes_per_sec": 0 00:25:21.357 }, 00:25:21.357 "claimed": true, 00:25:21.357 "claim_type": "read_many_write_one", 00:25:21.357 "zoned": false, 00:25:21.357 "supported_io_types": { 00:25:21.357 "read": true, 00:25:21.357 "write": true, 00:25:21.357 "unmap": true, 00:25:21.357 "flush": true, 00:25:21.357 "reset": true, 00:25:21.357 "nvme_admin": true, 00:25:21.357 "nvme_io": true, 00:25:21.357 "nvme_io_md": false, 00:25:21.357 "write_zeroes": true, 00:25:21.357 "zcopy": false, 00:25:21.357 "get_zone_info": false, 00:25:21.357 "zone_management": false, 00:25:21.357 "zone_append": false, 00:25:21.357 "compare": true, 00:25:21.357 "compare_and_write": false, 00:25:21.357 "abort": true, 00:25:21.357 "seek_hole": false, 00:25:21.357 "seek_data": false, 00:25:21.357 
"copy": true, 00:25:21.357 "nvme_iov_md": false 00:25:21.357 }, 00:25:21.357 "driver_specific": { 00:25:21.357 "nvme": [ 00:25:21.357 { 00:25:21.357 "pci_address": "0000:00:11.0", 00:25:21.357 "trid": { 00:25:21.357 "trtype": "PCIe", 00:25:21.357 "traddr": "0000:00:11.0" 00:25:21.357 }, 00:25:21.357 "ctrlr_data": { 00:25:21.357 "cntlid": 0, 00:25:21.357 "vendor_id": "0x1b36", 00:25:21.357 "model_number": "QEMU NVMe Ctrl", 00:25:21.357 "serial_number": "12341", 00:25:21.357 "firmware_revision": "8.0.0", 00:25:21.357 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:21.357 "oacs": { 00:25:21.357 "security": 0, 00:25:21.357 "format": 1, 00:25:21.357 "firmware": 0, 00:25:21.357 "ns_manage": 1 00:25:21.357 }, 00:25:21.357 "multi_ctrlr": false, 00:25:21.357 "ana_reporting": false 00:25:21.357 }, 00:25:21.357 "vs": { 00:25:21.357 "nvme_version": "1.4" 00:25:21.357 }, 00:25:21.357 "ns_data": { 00:25:21.357 "id": 1, 00:25:21.357 "can_share": false 00:25:21.357 } 00:25:21.357 } 00:25:21.357 ], 00:25:21.357 "mp_policy": "active_passive" 00:25:21.357 } 00:25:21.357 } 00:25:21.357 ]' 00:25:21.357 17:41:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:21.357 17:41:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:21.357 17:41:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:21.357 17:41:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:21.357 17:41:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:21.357 17:41:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:21.357 17:41:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:21.357 17:41:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:21.357 17:41:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:21.357 17:41:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:21.357 17:41:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:21.616 17:41:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=b8f611b1-4343-43ba-a4d3-2231064c2d36 00:25:21.617 17:41:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:21.617 17:41:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b8f611b1-4343-43ba-a4d3-2231064c2d36 00:25:21.877 17:41:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:22.138 17:41:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=20f20238-93d7-4e14-83de-257d730dff97 00:25:22.138 17:41:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 20f20238-93d7-4e14-83de-257d730dff97 00:25:22.398 17:41:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=45eb1f54-edca-494b-9aef-b372fe46f5ea 00:25:22.398 17:41:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:22.398 17:41:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 45eb1f54-edca-494b-9aef-b372fe46f5ea 00:25:22.398 17:41:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:22.398 17:41:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:25:22.398 17:41:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=45eb1f54-edca-494b-9aef-b372fe46f5ea 00:25:22.398 17:41:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:22.398 17:41:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 45eb1f54-edca-494b-9aef-b372fe46f5ea 00:25:22.399 17:41:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=45eb1f54-edca-494b-9aef-b372fe46f5ea 00:25:22.399 17:41:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:22.399 17:41:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:22.399 17:41:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:22.399 17:41:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 45eb1f54-edca-494b-9aef-b372fe46f5ea 00:25:22.657 17:41:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:22.657 { 00:25:22.657 "name": "45eb1f54-edca-494b-9aef-b372fe46f5ea", 00:25:22.657 "aliases": [ 00:25:22.657 "lvs/nvme0n1p0" 00:25:22.657 ], 00:25:22.657 "product_name": "Logical Volume", 00:25:22.657 "block_size": 4096, 00:25:22.657 "num_blocks": 26476544, 00:25:22.657 "uuid": "45eb1f54-edca-494b-9aef-b372fe46f5ea", 00:25:22.657 "assigned_rate_limits": { 00:25:22.657 "rw_ios_per_sec": 0, 00:25:22.657 "rw_mbytes_per_sec": 0, 00:25:22.657 "r_mbytes_per_sec": 0, 00:25:22.657 "w_mbytes_per_sec": 0 00:25:22.657 }, 00:25:22.657 "claimed": false, 00:25:22.657 "zoned": false, 00:25:22.657 "supported_io_types": { 00:25:22.657 "read": true, 00:25:22.657 "write": true, 00:25:22.657 "unmap": true, 00:25:22.657 "flush": false, 00:25:22.657 "reset": true, 00:25:22.657 "nvme_admin": false, 00:25:22.657 "nvme_io": false, 00:25:22.657 "nvme_io_md": false, 00:25:22.657 "write_zeroes": true, 00:25:22.657 "zcopy": false, 00:25:22.657 "get_zone_info": false, 00:25:22.657 "zone_management": false, 00:25:22.657 "zone_append": false, 00:25:22.657 "compare": false, 00:25:22.657 "compare_and_write": false, 00:25:22.657 "abort": false, 00:25:22.657 "seek_hole": true, 00:25:22.657 "seek_data": true, 00:25:22.657 "copy": false, 00:25:22.657 "nvme_iov_md": false 00:25:22.657 }, 00:25:22.657 "driver_specific": { 00:25:22.657 "lvol": { 00:25:22.657 "lvol_store_uuid": "20f20238-93d7-4e14-83de-257d730dff97", 00:25:22.657 "base_bdev": "nvme0n1", 00:25:22.657 "thin_provision": true, 00:25:22.657 "num_allocated_clusters": 0, 00:25:22.657 "snapshot": false, 00:25:22.657 "clone": false, 00:25:22.657 "esnap_clone": false 00:25:22.657 } 00:25:22.657 } 00:25:22.657 } 00:25:22.657 ]' 00:25:22.657 17:41:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:22.657 17:41:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:22.657 17:41:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:22.657 17:41:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:22.657 17:41:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:22.657 17:41:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:22.657 17:41:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:22.657 17:41:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:22.657 17:41:55 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:22.916 17:41:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:22.916 17:41:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:22.916 17:41:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 45eb1f54-edca-494b-9aef-b372fe46f5ea 00:25:22.916 17:41:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=45eb1f54-edca-494b-9aef-b372fe46f5ea 00:25:22.916 17:41:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:22.916 17:41:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:22.916 17:41:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:22.916 17:41:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 45eb1f54-edca-494b-9aef-b372fe46f5ea 00:25:23.173 17:41:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:23.173 { 00:25:23.173 "name": "45eb1f54-edca-494b-9aef-b372fe46f5ea", 00:25:23.173 "aliases": [ 00:25:23.173 "lvs/nvme0n1p0" 00:25:23.173 ], 00:25:23.173 "product_name": "Logical Volume", 00:25:23.173 "block_size": 4096, 00:25:23.173 "num_blocks": 26476544, 00:25:23.173 "uuid": "45eb1f54-edca-494b-9aef-b372fe46f5ea", 00:25:23.173 "assigned_rate_limits": { 00:25:23.173 "rw_ios_per_sec": 0, 00:25:23.173 "rw_mbytes_per_sec": 0, 00:25:23.173 "r_mbytes_per_sec": 0, 00:25:23.173 "w_mbytes_per_sec": 0 00:25:23.173 }, 00:25:23.173 "claimed": false, 00:25:23.173 "zoned": false, 00:25:23.173 "supported_io_types": { 00:25:23.173 "read": true, 00:25:23.173 "write": true, 00:25:23.173 "unmap": true, 00:25:23.173 "flush": false, 00:25:23.173 "reset": true, 00:25:23.173 "nvme_admin": false, 00:25:23.173 "nvme_io": false, 00:25:23.173 "nvme_io_md": false, 00:25:23.173 "write_zeroes": true, 00:25:23.173 "zcopy": false, 00:25:23.173 "get_zone_info": false, 00:25:23.173 "zone_management": false, 00:25:23.173 "zone_append": false, 00:25:23.174 "compare": false, 00:25:23.174 "compare_and_write": false, 00:25:23.174 "abort": false, 00:25:23.174 "seek_hole": true, 00:25:23.174 "seek_data": true, 00:25:23.174 "copy": false, 00:25:23.174 "nvme_iov_md": false 00:25:23.174 }, 00:25:23.174 "driver_specific": { 00:25:23.174 "lvol": { 00:25:23.174 "lvol_store_uuid": "20f20238-93d7-4e14-83de-257d730dff97", 00:25:23.174 "base_bdev": "nvme0n1", 00:25:23.174 "thin_provision": true, 00:25:23.174 "num_allocated_clusters": 0, 00:25:23.174 "snapshot": false, 00:25:23.174 "clone": false, 00:25:23.174 "esnap_clone": false 00:25:23.174 } 00:25:23.174 } 00:25:23.174 } 00:25:23.174 ]' 00:25:23.174 17:41:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:23.174 17:41:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:23.174 17:41:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:23.174 17:41:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:23.174 17:41:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:23.174 17:41:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:23.174 17:41:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:23.174 17:41:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
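Collapsing the xtrace so far, the bdev stack under the future ftl0 is built with five RPCs (all shown verbatim above; the lvstore and lvol UUIDs differ per run). The 5171 MiB cache_size just computed is carved out of nvc0n1 immediately below with bdev_split_create:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base disk -> nvme0n1
$rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid' |
    xargs -r -n1 $rpc bdev_lvol_delete_lvstore -u                   # clear_lvols: drop stale stores
lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)
$rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"                 # thin-provisioned 101 GiB lvol
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache disk -> nvc0n1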
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:23.431 17:41:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:23.431 17:41:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 45eb1f54-edca-494b-9aef-b372fe46f5ea 00:25:23.431 17:41:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=45eb1f54-edca-494b-9aef-b372fe46f5ea 00:25:23.431 17:41:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:23.431 17:41:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:23.431 17:41:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:23.431 17:41:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 45eb1f54-edca-494b-9aef-b372fe46f5ea 00:25:23.431 17:41:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:23.431 { 00:25:23.431 "name": "45eb1f54-edca-494b-9aef-b372fe46f5ea", 00:25:23.431 "aliases": [ 00:25:23.431 "lvs/nvme0n1p0" 00:25:23.431 ], 00:25:23.432 "product_name": "Logical Volume", 00:25:23.432 "block_size": 4096, 00:25:23.432 "num_blocks": 26476544, 00:25:23.432 "uuid": "45eb1f54-edca-494b-9aef-b372fe46f5ea", 00:25:23.432 "assigned_rate_limits": { 00:25:23.432 "rw_ios_per_sec": 0, 00:25:23.432 "rw_mbytes_per_sec": 0, 00:25:23.432 "r_mbytes_per_sec": 0, 00:25:23.432 "w_mbytes_per_sec": 0 00:25:23.432 }, 00:25:23.432 "claimed": false, 00:25:23.432 "zoned": false, 00:25:23.432 "supported_io_types": { 00:25:23.432 "read": true, 00:25:23.432 "write": true, 00:25:23.432 "unmap": true, 00:25:23.432 "flush": false, 00:25:23.432 "reset": true, 00:25:23.432 "nvme_admin": false, 00:25:23.432 "nvme_io": false, 00:25:23.432 "nvme_io_md": false, 00:25:23.432 "write_zeroes": true, 00:25:23.432 "zcopy": false, 00:25:23.432 "get_zone_info": false, 00:25:23.432 "zone_management": false, 00:25:23.432 "zone_append": false, 00:25:23.432 "compare": false, 00:25:23.432 "compare_and_write": false, 00:25:23.432 "abort": false, 00:25:23.432 "seek_hole": true, 00:25:23.432 "seek_data": true, 00:25:23.432 "copy": false, 00:25:23.432 "nvme_iov_md": false 00:25:23.432 }, 00:25:23.432 "driver_specific": { 00:25:23.432 "lvol": { 00:25:23.432 "lvol_store_uuid": "20f20238-93d7-4e14-83de-257d730dff97", 00:25:23.432 "base_bdev": "nvme0n1", 00:25:23.432 "thin_provision": true, 00:25:23.432 "num_allocated_clusters": 0, 00:25:23.432 "snapshot": false, 00:25:23.432 "clone": false, 00:25:23.432 "esnap_clone": false 00:25:23.432 } 00:25:23.432 } 00:25:23.432 } 00:25:23.432 ]' 00:25:23.432 17:41:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:23.690 17:41:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:23.690 17:41:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:23.690 17:41:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:23.690 17:41:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:23.690 17:41:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:23.690 17:41:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:23.690 17:41:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 45eb1f54-edca-494b-9aef-b372fe46f5ea 
--l2p_dram_limit 10' 00:25:23.690 17:41:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:23.690 17:41:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:23.690 17:41:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:23.690 17:41:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 45eb1f54-edca-494b-9aef-b372fe46f5ea --l2p_dram_limit 10 -c nvc0n1p0 00:25:23.690 [2024-12-07 17:41:57.045915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.690 [2024-12-07 17:41:57.046042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:23.690 [2024-12-07 17:41:57.046061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:23.690 [2024-12-07 17:41:57.046068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.690 [2024-12-07 17:41:57.046124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.690 [2024-12-07 17:41:57.046132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:23.690 [2024-12-07 17:41:57.046140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:23.690 [2024-12-07 17:41:57.046146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.690 [2024-12-07 17:41:57.046166] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:23.690 [2024-12-07 17:41:57.046690] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:23.691 [2024-12-07 17:41:57.046705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.691 [2024-12-07 17:41:57.046711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:23.691 [2024-12-07 17:41:57.046719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:25:23.691 [2024-12-07 17:41:57.046725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.691 [2024-12-07 17:41:57.046749] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0c15aa6c-8f57-483a-a574-e9de90c611d1 00:25:23.691 [2024-12-07 17:41:57.047696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.691 [2024-12-07 17:41:57.047714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:23.691 [2024-12-07 17:41:57.047722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:23.691 [2024-12-07 17:41:57.047729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.691 [2024-12-07 17:41:57.052358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.691 [2024-12-07 17:41:57.052391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:23.691 [2024-12-07 17:41:57.052398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.592 ms 00:25:23.691 [2024-12-07 17:41:57.052406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.691 [2024-12-07 17:41:57.052507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.691 [2024-12-07 17:41:57.052517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:23.691 [2024-12-07 17:41:57.052523] 
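ftl_construct_args from above expands to a single create call: the thin lvol is the data device, the 5171 MiB split of the cache disk is the write buffer, and the resident L2P table is capped at 10 MiB of DRAM (which resurfaces later in the trace as "l2p maximum resident size is: 9 (of 10) MiB"):

# -t 240 matches the timeout=240 set by dirty_shutdown.sh above
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -t 240 bdev_ftl_create -b ftl0 \
    -d 45eb1f54-edca-494b-9aef-b372fe46f5ea \
    --l2p_dram_limit 10 -c nvc0n1p0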
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:23.691 [2024-12-07 17:41:57.052533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.691 [2024-12-07 17:41:57.052564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.691 [2024-12-07 17:41:57.052573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:23.691 [2024-12-07 17:41:57.052581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:23.691 [2024-12-07 17:41:57.052588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.691 [2024-12-07 17:41:57.052605] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:23.691 [2024-12-07 17:41:57.055800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.691 [2024-12-07 17:41:57.055824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:23.691 [2024-12-07 17:41:57.055834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.199 ms 00:25:23.691 [2024-12-07 17:41:57.055841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.691 [2024-12-07 17:41:57.055868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.691 [2024-12-07 17:41:57.055874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:23.691 [2024-12-07 17:41:57.055882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:23.691 [2024-12-07 17:41:57.055887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.691 [2024-12-07 17:41:57.055908] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:23.691 [2024-12-07 17:41:57.056029] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:23.691 [2024-12-07 17:41:57.056042] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:23.691 [2024-12-07 17:41:57.056050] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:23.691 [2024-12-07 17:41:57.056059] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:23.691 [2024-12-07 17:41:57.056065] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:23.691 [2024-12-07 17:41:57.056073] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:23.691 [2024-12-07 17:41:57.056079] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:23.691 [2024-12-07 17:41:57.056089] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:23.691 [2024-12-07 17:41:57.056094] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:23.691 [2024-12-07 17:41:57.056101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.691 [2024-12-07 17:41:57.056111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:23.691 [2024-12-07 17:41:57.056119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:25:23.691 [2024-12-07 17:41:57.056124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.691 [2024-12-07 17:41:57.056196] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.691 [2024-12-07 17:41:57.056203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:23.691 [2024-12-07 17:41:57.056210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:23.691 [2024-12-07 17:41:57.056216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.691 [2024-12-07 17:41:57.056294] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:23.691 [2024-12-07 17:41:57.056302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:23.691 [2024-12-07 17:41:57.056309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:23.691 [2024-12-07 17:41:57.056315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.691 [2024-12-07 17:41:57.056322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:23.691 [2024-12-07 17:41:57.056327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:23.691 [2024-12-07 17:41:57.056333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:23.691 [2024-12-07 17:41:57.056338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:23.691 [2024-12-07 17:41:57.056345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:23.691 [2024-12-07 17:41:57.056349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:23.691 [2024-12-07 17:41:57.056357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:23.691 [2024-12-07 17:41:57.056362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:23.691 [2024-12-07 17:41:57.056368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:23.691 [2024-12-07 17:41:57.056374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:23.691 [2024-12-07 17:41:57.056380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:23.691 [2024-12-07 17:41:57.056385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.691 [2024-12-07 17:41:57.056393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:23.691 [2024-12-07 17:41:57.056398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:23.691 [2024-12-07 17:41:57.056405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.691 [2024-12-07 17:41:57.056410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:23.691 [2024-12-07 17:41:57.056417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:23.691 [2024-12-07 17:41:57.056421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:23.691 [2024-12-07 17:41:57.056428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:23.691 [2024-12-07 17:41:57.056433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:23.691 [2024-12-07 17:41:57.056439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:23.691 [2024-12-07 17:41:57.056444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:23.691 [2024-12-07 17:41:57.056450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:23.691 [2024-12-07 17:41:57.056454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:23.691 [2024-12-07 17:41:57.056460] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:23.691 [2024-12-07 17:41:57.056465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:23.691 [2024-12-07 17:41:57.056471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:23.691 [2024-12-07 17:41:57.056477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:23.691 [2024-12-07 17:41:57.056485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:23.691 [2024-12-07 17:41:57.056489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:23.691 [2024-12-07 17:41:57.056496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:23.691 [2024-12-07 17:41:57.056501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:23.691 [2024-12-07 17:41:57.056508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:23.691 [2024-12-07 17:41:57.056513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:23.691 [2024-12-07 17:41:57.056520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:23.691 [2024-12-07 17:41:57.056525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.691 [2024-12-07 17:41:57.056531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:23.691 [2024-12-07 17:41:57.056536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:23.691 [2024-12-07 17:41:57.056542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.691 [2024-12-07 17:41:57.056546] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:23.691 [2024-12-07 17:41:57.056554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:23.691 [2024-12-07 17:41:57.056559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:23.691 [2024-12-07 17:41:57.056566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.691 [2024-12-07 17:41:57.056572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:23.691 [2024-12-07 17:41:57.056581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:23.691 [2024-12-07 17:41:57.056586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:23.691 [2024-12-07 17:41:57.056593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:23.691 [2024-12-07 17:41:57.056598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:23.692 [2024-12-07 17:41:57.056604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:23.692 [2024-12-07 17:41:57.056610] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:23.692 [2024-12-07 17:41:57.056621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:23.692 [2024-12-07 17:41:57.056627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:23.692 [2024-12-07 17:41:57.056634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:23.692 [2024-12-07 17:41:57.056640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:23.692 [2024-12-07 17:41:57.056647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:23.692 [2024-12-07 17:41:57.056653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:23.692 [2024-12-07 17:41:57.056660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:23.692 [2024-12-07 17:41:57.056665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:23.692 [2024-12-07 17:41:57.056673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:23.692 [2024-12-07 17:41:57.056679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:23.692 [2024-12-07 17:41:57.056688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:23.692 [2024-12-07 17:41:57.056694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:23.692 [2024-12-07 17:41:57.056701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:23.692 [2024-12-07 17:41:57.056706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:23.692 [2024-12-07 17:41:57.056714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:23.692 [2024-12-07 17:41:57.056720] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:23.692 [2024-12-07 17:41:57.056727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:23.692 [2024-12-07 17:41:57.056734] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:23.692 [2024-12-07 17:41:57.056741] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:23.692 [2024-12-07 17:41:57.056747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:23.692 [2024-12-07 17:41:57.056754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:23.692 [2024-12-07 17:41:57.056760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.692 [2024-12-07 17:41:57.056767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:23.692 [2024-12-07 17:41:57.056773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:25:23.692 [2024-12-07 17:41:57.056780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.692 [2024-12-07 17:41:57.056818] mngt/ftl_mngt_misc.c: 
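A unit note on the superblock layout dump above: blk_offs and blk_sz count 4096-byte FTL blocks (the bdev block size), which is how the MiB figures in the region listing are derived. For example, the type:0x2 region at blk_offs:0x20 / blk_sz:0x5000 lines up with the "Region l2p ... offset: 0.12 MiB ... blocks: 80.00 MiB" entry:

echo $(( 0x5000 * 4096 / 1024 / 1024 ))   # 20480 blocks * 4 KiB = 80 MiB (the l2p region)
echo $(( 0x20 * 4096 ))                   # 32-block offset = 131072 B = 0.125 MiB ("0.12 MiB")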
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:23.692 [2024-12-07 17:41:57.056931] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:27.884 [2024-12-07 17:42:00.697507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.884 [2024-12-07 17:42:00.697842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:27.884 [2024-12-07 17:42:00.697930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3640.673 ms 00:25:27.884 [2024-12-07 17:42:00.697961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.884 [2024-12-07 17:42:00.729663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.884 [2024-12-07 17:42:00.729897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:27.884 [2024-12-07 17:42:00.730112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.357 ms 00:25:27.884 [2024-12-07 17:42:00.730160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.884 [2024-12-07 17:42:00.730317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.884 [2024-12-07 17:42:00.731032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:27.884 [2024-12-07 17:42:00.731089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:27.884 [2024-12-07 17:42:00.731122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.884 [2024-12-07 17:42:00.766275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.884 [2024-12-07 17:42:00.766459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:27.884 [2024-12-07 17:42:00.766942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.062 ms 00:25:27.884 [2024-12-07 17:42:00.767037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.884 [2024-12-07 17:42:00.767149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.884 [2024-12-07 17:42:00.767186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:27.884 [2024-12-07 17:42:00.767209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:27.884 [2024-12-07 17:42:00.767239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.884 [2024-12-07 17:42:00.767800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.884 [2024-12-07 17:42:00.767956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:27.884 [2024-12-07 17:42:00.768043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:25:27.884 [2024-12-07 17:42:00.768072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.884 [2024-12-07 17:42:00.768201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.884 [2024-12-07 17:42:00.768225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:27.884 [2024-12-07 17:42:00.768250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:25:27.884 [2024-12-07 17:42:00.768274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.884 [2024-12-07 17:42:00.785794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.884 [2024-12-07 17:42:00.785969] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:27.884 [2024-12-07 17:42:00.786105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.436 ms 00:25:27.884 [2024-12-07 17:42:00.786136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.884 [2024-12-07 17:42:00.816957] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:27.884 [2024-12-07 17:42:00.820886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.885 [2024-12-07 17:42:00.821054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:27.885 [2024-12-07 17:42:00.821078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.628 ms 00:25:27.885 [2024-12-07 17:42:00.821087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.885 [2024-12-07 17:42:00.914708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.885 [2024-12-07 17:42:00.914914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:27.885 [2024-12-07 17:42:00.914941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.575 ms 00:25:27.885 [2024-12-07 17:42:00.914951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.885 [2024-12-07 17:42:00.915176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.885 [2024-12-07 17:42:00.915193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:27.885 [2024-12-07 17:42:00.915208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:25:27.885 [2024-12-07 17:42:00.915217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.885 [2024-12-07 17:42:00.941176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.885 [2024-12-07 17:42:00.941345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:27.885 [2024-12-07 17:42:00.941372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.901 ms 00:25:27.885 [2024-12-07 17:42:00.941382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.885 [2024-12-07 17:42:00.966085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.885 [2024-12-07 17:42:00.966131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:27.885 [2024-12-07 17:42:00.966147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.653 ms 00:25:27.885 [2024-12-07 17:42:00.966156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.885 [2024-12-07 17:42:00.966756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.885 [2024-12-07 17:42:00.966775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:27.885 [2024-12-07 17:42:00.966786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:25:27.885 [2024-12-07 17:42:00.966797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.885 [2024-12-07 17:42:01.053951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.885 [2024-12-07 17:42:01.054021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:27.885 [2024-12-07 17:42:01.054041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.109 ms 00:25:27.885 [2024-12-07 17:42:01.054050] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.885 [2024-12-07 17:42:01.081772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.885 [2024-12-07 17:42:01.081821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:27.885 [2024-12-07 17:42:01.081837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.628 ms 00:25:27.885 [2024-12-07 17:42:01.081845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.885 [2024-12-07 17:42:01.107738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.885 [2024-12-07 17:42:01.107783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:27.885 [2024-12-07 17:42:01.107798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.837 ms 00:25:27.885 [2024-12-07 17:42:01.107806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.885 [2024-12-07 17:42:01.133517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.885 [2024-12-07 17:42:01.133564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:27.885 [2024-12-07 17:42:01.133579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.659 ms 00:25:27.885 [2024-12-07 17:42:01.133588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.885 [2024-12-07 17:42:01.133641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.885 [2024-12-07 17:42:01.133652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:27.885 [2024-12-07 17:42:01.133666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:27.885 [2024-12-07 17:42:01.133674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.885 [2024-12-07 17:42:01.133767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.885 [2024-12-07 17:42:01.133781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:27.885 [2024-12-07 17:42:01.133792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:27.885 [2024-12-07 17:42:01.133800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.885 [2024-12-07 17:42:01.135012] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4088.562 ms, result 0 00:25:27.885 { 00:25:27.885 "name": "ftl0", 00:25:27.885 "uuid": "0c15aa6c-8f57-483a-a574-e9de90c611d1" 00:25:27.885 } 00:25:27.885 17:42:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:27.885 17:42:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:28.145 17:42:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:28.145 17:42:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:28.145 17:42:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:28.406 /dev/nbd0 00:25:28.406 17:42:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:28.406 17:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:28.406 17:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:25:28.406 17:42:01 ftl.ftl_dirty_shutdown -- 
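Before the device is exercised, the echo / save_subsystem_config / echo trio above wraps the RPC output so the result parses as one complete JSON config document for the later reload; the redirect target is elided in the trace, so the file name below is hypothetical:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
{
    echo '{"subsystems": ['
    $rpc save_subsystem_config -n bdev   # dumps only the bdev subsystem object
    echo ']}'
} > ftl.json                             # hypothetical name; the trace does not show it

modprobe nbd
$rpc nbd_start_disk ftl0 /dev/nbd0       # expose ftl0 as /dev/nbd0 for plain dd/md5sum tooling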
common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:28.406 17:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:28.406 17:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:28.406 17:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:25:28.406 17:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:28.406 17:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:28.406 17:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:28.406 1+0 records in 00:25:28.406 1+0 records out 00:25:28.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446029 s, 9.2 MB/s 00:25:28.406 17:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:28.406 17:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:25:28.406 17:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:28.406 17:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:28.406 17:42:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:25:28.406 17:42:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:28.406 [2024-12-07 17:42:01.724113] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:25:28.406 [2024-12-07 17:42:01.724476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80281 ] 00:25:28.667 [2024-12-07 17:42:01.892413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.667 [2024-12-07 17:42:02.013311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.055  [2024-12-07T17:42:04.381Z] Copying: 189/1024 [MB] (189 MBps) [2024-12-07T17:42:05.316Z] Copying: 380/1024 [MB] (190 MBps) [2024-12-07T17:42:06.354Z] Copying: 633/1024 [MB] (253 MBps) [2024-12-07T17:42:06.936Z] Copying: 886/1024 [MB] (252 MBps) [2024-12-07T17:42:07.504Z] Copying: 1024/1024 [MB] (average 225 MBps) 00:25:34.122 00:25:34.122 17:42:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:36.033 17:42:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:36.033 [2024-12-07 17:42:09.365717] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
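The dirty-data pass just traced reduces to three commands (all verbatim above): fill a 1 GiB file with random bytes via spdk_dd, record its md5 as the reference for the later compare, then stream it onto the FTL device through /dev/nbd0 with O_DIRECT. 262144 blocks x 4096 B = 1 GiB, matching the 1024 MB progress counters:

spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
$spdk_dd -m 0x2 --if=/dev/urandom --of="$testfile" --bs=4096 --count=262144
md5sum "$testfile"   # reference checksum
$spdk_dd -m 0x2 --if="$testfile" --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct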
00:25:36.033 [2024-12-07 17:42:09.365947] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80358 ] 00:25:36.293 [2024-12-07 17:42:09.515585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.293 [2024-12-07 17:42:09.590609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.669  [2024-12-07T17:42:11.986Z] Copying: 18/1024 [MB] (18 MBps) [2024-12-07T17:42:12.920Z] Copying: 48/1024 [MB] (30 MBps) [2024-12-07T17:42:13.855Z] Copying: 69/1024 [MB] (21 MBps) [2024-12-07T17:42:14.789Z] Copying: 91/1024 [MB] (21 MBps) [2024-12-07T17:42:16.158Z] Copying: 111/1024 [MB] (20 MBps) [2024-12-07T17:42:17.089Z] Copying: 133/1024 [MB] (22 MBps) [2024-12-07T17:42:18.048Z] Copying: 167/1024 [MB] (34 MBps) [2024-12-07T17:42:18.981Z] Copying: 202/1024 [MB] (34 MBps) [2024-12-07T17:42:19.914Z] Copying: 227/1024 [MB] (25 MBps) [2024-12-07T17:42:20.854Z] Copying: 246/1024 [MB] (18 MBps) [2024-12-07T17:42:21.795Z] Copying: 266/1024 [MB] (19 MBps) [2024-12-07T17:42:23.178Z] Copying: 283/1024 [MB] (17 MBps) [2024-12-07T17:42:24.124Z] Copying: 302/1024 [MB] (18 MBps) [2024-12-07T17:42:25.067Z] Copying: 319/1024 [MB] (16 MBps) [2024-12-07T17:42:25.999Z] Copying: 332/1024 [MB] (12 MBps) [2024-12-07T17:42:26.934Z] Copying: 361/1024 [MB] (29 MBps) [2024-12-07T17:42:27.873Z] Copying: 395/1024 [MB] (34 MBps) [2024-12-07T17:42:28.811Z] Copying: 417/1024 [MB] (21 MBps) [2024-12-07T17:42:30.196Z] Copying: 434/1024 [MB] (16 MBps) [2024-12-07T17:42:31.133Z] Copying: 450/1024 [MB] (15 MBps) [2024-12-07T17:42:32.073Z] Copying: 473/1024 [MB] (23 MBps) [2024-12-07T17:42:33.016Z] Copying: 505/1024 [MB] (31 MBps) [2024-12-07T17:42:33.953Z] Copying: 521/1024 [MB] (16 MBps) [2024-12-07T17:42:34.903Z] Copying: 554/1024 [MB] (33 MBps) [2024-12-07T17:42:35.837Z] Copying: 572/1024 [MB] (17 MBps) [2024-12-07T17:42:36.782Z] Copying: 601/1024 [MB] (29 MBps) [2024-12-07T17:42:38.227Z] Copying: 623/1024 [MB] (21 MBps) [2024-12-07T17:42:38.856Z] Copying: 643/1024 [MB] (19 MBps) [2024-12-07T17:42:39.792Z] Copying: 663/1024 [MB] (20 MBps) [2024-12-07T17:42:41.174Z] Copying: 697/1024 [MB] (34 MBps) [2024-12-07T17:42:42.118Z] Copying: 719/1024 [MB] (21 MBps) [2024-12-07T17:42:43.063Z] Copying: 735/1024 [MB] (16 MBps) [2024-12-07T17:42:44.007Z] Copying: 755/1024 [MB] (20 MBps) [2024-12-07T17:42:44.953Z] Copying: 772/1024 [MB] (17 MBps) [2024-12-07T17:42:45.892Z] Copying: 790/1024 [MB] (18 MBps) [2024-12-07T17:42:46.822Z] Copying: 821/1024 [MB] (30 MBps) [2024-12-07T17:42:48.205Z] Copying: 850/1024 [MB] (29 MBps) [2024-12-07T17:42:48.778Z] Copying: 868/1024 [MB] (18 MBps) [2024-12-07T17:42:50.158Z] Copying: 888/1024 [MB] (19 MBps) [2024-12-07T17:42:51.103Z] Copying: 922/1024 [MB] (33 MBps) [2024-12-07T17:42:52.047Z] Copying: 944/1024 [MB] (21 MBps) [2024-12-07T17:42:52.983Z] Copying: 960/1024 [MB] (16 MBps) [2024-12-07T17:42:53.918Z] Copying: 987/1024 [MB] (27 MBps) [2024-12-07T17:42:53.918Z] Copying: 1021/1024 [MB] (33 MBps) [2024-12-07T17:42:54.486Z] Copying: 1024/1024 [MB] (average 23 MBps) 00:26:21.104 00:26:21.104 17:42:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:26:21.104 17:42:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:26:21.363 17:42:54 ftl.ftl_dirty_shutdown -- 
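After the copy completes, the shutdown sequence being traced here is three commands (verbatim above and below): flush the nbd, detach it, then unload the FTL bdev and watch each persistence step (L2P, NV cache metadata, band/trim metadata, superblock) complete in the per-step trace that follows:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sync /dev/nbd0                    # flush the page cache down to the FTL device
$rpc nbd_stop_disk /dev/nbd0
$rpc bdev_ftl_unload -b ftl0      # the trace_step entries below come from this call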
00:26:21.363 17:42:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:26:21.622 [2024-12-07 17:42:54.824771] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel, duration: 0.003 ms, status: 0
00:26:21.623 [2024-12-07 17:42:54.824848] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:26:21.623 [2024-12-07 17:42:54.827026] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device, duration: 2.164 ms, status: 0
00:26:21.623 [2024-12-07 17:42:54.828972] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller, duration: 1.880 ms, status: 0
00:26:21.623 [2024-12-07 17:42:54.843492] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P, duration: 14.452 ms, status: 0
00:26:21.623 [2024-12-07 17:42:54.848415] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims, duration: 4.853 ms, status: 0
00:26:21.623 [2024-12-07 17:42:54.867812] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata, duration: 19.310 ms, status: 0
00:26:21.623 [2024-12-07 17:42:54.879589] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata, duration: 11.701 ms, status: 0
00:26:21.623 [2024-12-07 17:42:54.879740] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata, duration: 0.075 ms, status: 0
00:26:21.623 [2024-12-07 17:42:54.897505] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist band info metadata, duration: 17.728 ms, status: 0
00:26:21.623 [2024-12-07 17:42:54.915099] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist trim metadata, duration: 17.524 ms, status: 0
00:26:21.623 [2024-12-07 17:42:54.931907] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock, duration: 16.739 ms, status: 0
00:26:21.623 [2024-12-07 17:42:54.948732] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state, duration: 16.710 ms, status: 0
00:26:21.623 [2024-12-07 17:42:54.948801] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:26:21.624 [2024-12-07 17:42:54.948821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free (all 100 bands report identical values)
00:26:21.624 [2024-12-07 17:42:54.949528] ftl_debug.c:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0c15aa6c-8f57-483a-a574-e9de90c611d1, total valid LBAs: 0, total writes: 960, user writes: 0, WAF: inf, limits: crit: 0, high: 0, low: 0, start: 0
00:26:21.624 [2024-12-07 17:42:54.949599] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 0.799 ms, status: 0
00:26:21.624 [2024-12-07 17:42:54.959145] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P, duration: 9.502 ms, status: 0
00:26:21.624 [2024-12-07 17:42:54.959453] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing, duration: 0.252 ms, status: 0
00:26:21.624 [2024-12-07 17:42:54.992242] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, duration: 0.000 ms, status: 0
00:26:21.624 [2024-12-07 17:42:54.992333] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata, duration: 0.000 ms, status: 0
00:26:21.624 [2024-12-07 17:42:54.992404] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map, duration: 0.000 ms, status: 0
00:26:21.624 [2024-12-07 17:42:54.992441] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map, duration: 0.000 ms, status: 0
00:26:21.883 [2024-12-07 17:42:55.051452] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache, duration: 0.000 ms, status: 0
00:26:21.883 [2024-12-07 17:42:55.099843] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata, duration: 0.000 ms, status: 0
00:26:21.883 [2024-12-07 17:42:55.099997] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel, duration: 0.000 ms, status: 0
00:26:21.884 [2024-12-07 17:42:55.100060] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands, duration: 0.000 ms, status: 0
00:26:21.884 [2024-12-07 17:42:55.100154] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools, duration: 0.000 ms, status: 0
00:26:21.884 [2024-12-07 17:42:55.100202] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock, duration: 0.000 ms, status: 0
00:26:21.884 [2024-12-07 17:42:55.100252] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev, duration: 0.000 ms, status: 0
00:26:21.884 [2024-12-07 17:42:55.100309] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev, duration: 0.000 ms, status: 0
00:26:21.884 [2024-12-07 17:42:55.100429] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 275.629 ms, result 0
00:26:21.884 true
00:26:21.884 17:42:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 80128
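The unload above completed cleanly (result 0), after which the test force-kills the spdk_tgt process and, as the next step shows, deletes its shared-memory trace file. A sketch of that teardown, with the PID parameterized ($svcpid is a stand-in for the value captured when the target was launched; 80128 in this run):

    # Force-kill the target and remove its trace file under /dev/shm,
    # mirroring ftl/dirty_shutdown.sh steps 83-84 as seen in the log.
    kill -9 "$svcpid"
    rm -f "/dev/shm/spdk_tgt_trace.pid${svcpid}"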
00:26:21.884 17:42:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid80128
00:26:21.884 17:42:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
00:26:21.884 [2024-12-07 17:42:55.185996] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:26:21.884 [2024-12-07 17:42:55.186112] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80838 ]
00:26:22.161 [2024-12-07 17:42:55.342650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:22.161 [2024-12-07 17:42:55.417754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:23.535 [2024-12-07T17:42:57.849Z] Copying: 256/1024 [MB] (256 MBps)
00:26:23.535 [2024-12-07T17:42:58.782Z] Copying: 516/1024 [MB] (259 MBps)
00:26:23.535 [2024-12-07T17:42:59.715Z] Copying: 773/1024 [MB] (257 MBps)
00:26:26.901 [2024-12-07T17:43:00.283Z] Copying: 1024/1024 [MB] (average 256 MBps)
00:26:26.901 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 80128 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
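Step 87 above staged 1 GiB of random data into testfile2 at 256 MBps; step 88, which follows, replays that file into ftl0 at a matching offset using the saved ftl.json config. The offset falls out of the block arithmetic (262144 blocks x 4096 B/block = 1 GiB, assuming ftl0's 4 KiB block size for the --seek units). A sketch of the pair with the repo path abbreviated to $SPDK_REPO, a stand-in for /home/vagrant/spdk_repo/spdk:

    # Stage 1 GiB of random data in a regular file.
    "$SPDK_REPO/build/bin/spdk_dd" --if=/dev/urandom \
        --of="$SPDK_REPO/test/ftl/testfile2" --bs=4096 --count=262144
    # Replay it into ftl0 at block offset 262144, attaching via ftl.json.
    "$SPDK_REPO/build/bin/spdk_dd" --if="$SPDK_REPO/test/ftl/testfile2" \
        --ob=ftl0 --count=262144 --seek=262144 \
        --json="$SPDK_REPO/test/ftl/config/ftl.json"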
00:26:26.901 17:43:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:26:27.160 [2024-12-07 17:43:00.227350] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:26:27.160 [2024-12-07 17:43:00.227484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80894 ]
00:26:27.160 [2024-12-07 17:43:00.387135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:27.160 [2024-12-07 17:43:00.461324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:27.419 [2024-12-07 17:43:00.673396] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 (reported twice)
00:26:27.419 [2024-12-07 17:43:00.736143] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore
00:26:27.419 [2024-12-07 17:43:00.736440] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:26:27.419 [2024-12-07 17:43:00.736641] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:26:27.679 [2024-12-07 17:43:00.950518] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration, duration: 0.003 ms, status: 0
00:26:27.680 [2024-12-07 17:43:00.950607] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev, duration: 0.020 ms, status: 0
00:26:27.680 [2024-12-07 17:43:00.950640] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:26:27.680 [2024-12-07 17:43:00.951241] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:26:27.680 [2024-12-07 17:43:00.951257] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev, duration: 0.620 ms, status: 0
00:26:27.680 [2024-12-07 17:43:00.952218] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:26:27.680 [2024-12-07 17:43:00.961974] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block, duration: 9.757 ms, status: 0
00:26:27.680 [2024-12-07 17:43:00.962041] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block, duration: 0.016 ms, status: 0
00:26:27.680 [2024-12-07 17:43:00.966411] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools, duration: 4.283 ms, status: 0
00:26:27.680 [2024-12-07 17:43:00.966503] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands, duration: 0.040 ms, status: 0
00:26:27.680 [2024-12-07 17:43:00.966558] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device, duration: 0.004 ms, status: 0
00:26:27.680 [2024-12-07 17:43:00.966591] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:26:27.680 [2024-12-07 17:43:00.969214] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel, duration: 2.628 ms, status: 0
00:26:27.680 [2024-12-07 17:43:00.969383] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands, duration: 0.009 ms, status: 0
00:26:27.680 [2024-12-07 17:43:00.969417] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:26:27.680 [2024-12-07 17:43:00.969441] upgrade/ftl_sb_v5.c:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] blob load: nvc 0x150 bytes, base 0x48 bytes, layout 0x190 bytes
00:26:27.680 [2024-12-07 17:43:00.969557] upgrade/ftl_sb_v5.c:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] blob store: nvc 0x150 bytes, base 0x48 bytes, layout 0x190 bytes
00:26:27.680 [2024-12-07 17:43:00.969582] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB, NV cache device capacity: 5171.00 MiB, L2P entries: 20971520, L2P address size: 4, P2L checkpoint pages: 2048, NV cache chunk count: 5
00:26:27.680 [2024-12-07 17:43:00.969618] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout, duration: 0.202 ms, status: 0
00:26:27.680 [2024-12-07 17:43:00.969702] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout, duration: 0.058 ms, status: 0
00:26:27.680 [2024-12-07 17:43:00.969796] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout (region: offset / blocks, in MiB): sb 0.00 / 0.12, l2p 0.12 / 80.00, band_md 80.12 / 0.50, band_md_mirror 80.62 / 0.50, nvc_md 113.88 / 0.12, nvc_md_mirror 114.00 / 0.12, p2l0 81.12 / 8.00, p2l1 89.12 / 8.00, p2l2 97.12 / 8.00, p2l3 105.12 / 8.00, trim_md 113.12 / 0.25, trim_md_mirror 113.38 / 0.25, trim_log 113.62 / 0.12, trim_log_mirror 113.75 / 0.12
00:26:27.681 [2024-12-07 17:43:00.970042] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout (region: offset / blocks, in MiB): sb_mirror 0.00 / 0.12, vmap 102400.25 / 3.38, data_btm 0.25 / 102400.00
00:26:27.681 [2024-12-07 17:43:00.970099] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20, type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000, type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80, type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80, type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800, type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800, type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800, type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800, type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40, type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40, type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20, type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20, type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20, type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20, type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:26:27.681 [2024-12-07 17:43:00.970188] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20, type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20, type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000, type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360, type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:26:27.681 [2024-12-07 17:43:00.970222] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade, duration: 0.478 ms, status: 0
00:26:27.681 [2024-12-07 17:43:00.990955] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata, duration: 20.683 ms, status: 0
00:26:27.681 [2024-12-07 17:43:00.991081] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses, duration: 0.048 ms, status: 0
00:26:27.681 [2024-12-07 17:43:01.043325] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache, duration: 52.183 ms, status: 0
00:26:27.681 [2024-12-07 17:43:01.043420] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map, duration: 0.002 ms, status: 0
00:26:27.681 [2024-12-07 17:43:01.043756] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map, duration: 0.260 ms, status: 0
00:26:27.681 [2024-12-07 17:43:01.043887] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata, duration: 0.084 ms, status: 0
00:26:27.681 [2024-12-07 17:43:01.054465] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc, duration: 10.544 ms, status: 0
00:26:27.940 [2024-12-07 17:43:01.064847] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:26:27.940 [2024-12-07 17:43:01.064953] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:26:27.940 [2024-12-07 17:43:01.065020] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata, duration: 9.989 ms, status: 0
00:26:27.940 [2024-12-07 17:43:01.083475] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata, duration: 18.370 ms, status: 0
00:26:27.940 [2024-12-07 17:43:01.092680] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata, duration: 8.979 ms, status: 0
00:26:27.940 [2024-12-07 17:43:01.101588] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata, duration: 8.703 ms, status: 0
00:26:27.940 [2024-12-07 17:43:01.102214] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing, duration: 0.425 ms, status: 0
00:26:27.940 [2024-12-07 17:43:01.146327] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints, duration: 43.670 ms, status: 0
00:26:27.940 [2024-12-07 17:43:01.154882] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:26:27.940 [2024-12-07 17:43:01.156816] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P, duration: 9.904 ms, status: 0
00:26:27.940 [2024-12-07 17:43:01.157065] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P, duration: 0.010 ms, status: 0
00:26:27.940 [2024-12-07 17:43:01.157236] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization, duration: 0.025 ms, status: 0
00:26:27.941 [2024-12-07 17:43:01.157399] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller, duration: 0.004 ms, status: 0
00:26:27.941 [2024-12-07 17:43:01.157536] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:26:27.941 [2024-12-07 17:43:01.157554] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup, duration: 0.019 ms, status: 0
00:26:27.941 [2024-12-07 17:43:01.175179] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state, duration: 17.522 ms, status: 0
00:26:27.941 [2024-12-07 17:43:01.175412] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization, duration: 0.030 ms, status: 0
00:26:27.941 [2024-12-07 17:43:01.176471] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 225.619 ms, result 0
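This attach went down the recovery path: the blobstore recovery records and the NV cache restore steps above are the signature of reattaching a previously used device. A hypothetical post-run check against a captured copy of this console output ($LOG is a stand-in), using only strings that appear in the records above:

    # Hypothetical check: confirm the restarted instance recovered its state.
    grep -q "Performing recovery on blobstore" "$LOG" \
      && grep -q "Set FTL dirty state" "$LOG" \
      && echo "dirty attach exercised as expected"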
00:26:28.881 [2024-12-07T17:43:03.209Z] Copying: 28/1024 [MB] (28 MBps) [ ... incremental spdk_dd progress updates from 17:43:03 to 17:44:06 elided; per-interval throughput ranged from roughly 9 MBps to 28 MBps ... ]
00:27:33.244 [2024-12-07T17:44:06.626Z] Copying: 1024/1024 [MB] (average 15 MBps)
00:27:33.244 [2024-12-07 17:44:06.528457] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel, duration: 0.004 ms, status: 0
00:27:33.244 [2024-12-07 17:44:06.529634] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:27:33.244 [2024-12-07 17:44:06.535192] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device, duration: 5.515 ms, status: 0
00:27:33.245 [2024-12-07 17:44:06.548878] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller, duration: 10.400 ms, status: 0
00:27:33.245 [2024-12-07 17:44:06.572783] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P, duration: 23.814 ms, status: 0
00:27:33.245 [2024-12-07 17:44:06.579180] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims, duration: 6.115 ms, status: 0
00:27:33.245 [2024-12-07 17:44:06.605934] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata
00:27:33.245 [2024-12-07 17:44:06.606009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.522 ms 00:27:33.245 [2024-12-07 17:44:06.606017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.245 [2024-12-07 17:44:06.621923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.245 [2024-12-07 17:44:06.622117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:33.245 [2024-12-07 17:44:06.622139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.859 ms 00:27:33.245 [2024-12-07 17:44:06.622149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.505 [2024-12-07 17:44:06.856593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.505 [2024-12-07 17:44:06.856659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:33.505 [2024-12-07 17:44:06.856683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 234.333 ms 00:27:33.505 [2024-12-07 17:44:06.856691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.505 [2024-12-07 17:44:06.883045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.505 [2024-12-07 17:44:06.883092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:33.505 [2024-12-07 17:44:06.883104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.337 ms 00:27:33.505 [2024-12-07 17:44:06.883123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.766 [2024-12-07 17:44:06.909064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.766 [2024-12-07 17:44:06.909110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:33.766 [2024-12-07 17:44:06.909122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.895 ms 00:27:33.766 [2024-12-07 17:44:06.909129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.766 [2024-12-07 17:44:06.934036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.766 [2024-12-07 17:44:06.934078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:33.766 [2024-12-07 17:44:06.934089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.862 ms 00:27:33.766 [2024-12-07 17:44:06.934096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.766 [2024-12-07 17:44:06.959218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.766 [2024-12-07 17:44:06.959257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:33.766 [2024-12-07 17:44:06.959268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.050 ms 00:27:33.766 [2024-12-07 17:44:06.959275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.766 [2024-12-07 17:44:06.959317] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:33.766 [2024-12-07 17:44:06.959333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 104448 / 261120 wr_cnt: 1 state: open 00:27:33.766 [2024-12-07 17:44:06.959344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:33.766 [2024-12-07 17:44:06.959353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:33.766 [2024-12-07 17:44:06.959361] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:33.766 [2024-12-07 17:44:06.959369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:33.766 [2024-12-07 17:44:06.959377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:33.766 [2024-12-07 17:44:06.959385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:33.766 [2024-12-07 17:44:06.959393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:33.766 [2024-12-07 17:44:06.959401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:33.766 [2024-12-07 17:44:06.959408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:33.766 [2024-12-07 17:44:06.959417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:33.766 [2024-12-07 17:44:06.959424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:33.766 [2024-12-07 17:44:06.959433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:33.766 [2024-12-07 17:44:06.959440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:33.766 [2024-12-07 17:44:06.959447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:33.766 [2024-12-07 17:44:06.959454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:33.766 [2024-12-07 17:44:06.959462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:33.766 [2024-12-07 17:44:06.959469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:33.766 [2024-12-07 17:44:06.959476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 
17:44:06.959552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:27:33.767 [2024-12-07 17:44:06.959739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.959978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.960001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.960009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.960024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.960032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.960040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.960048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.960055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.960064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.960071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.960080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.960088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.960097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.960105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.960113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.960121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.960129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.960137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:33.767 [2024-12-07 17:44:06.960154] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:33.767 [2024-12-07 17:44:06.960162] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0c15aa6c-8f57-483a-a574-e9de90c611d1 00:27:33.767 [2024-12-07 17:44:06.960184] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 104448 
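The ftl_dev_dump_stats block printed around this point makes for an easy consistency check: WAF (write amplification factor) is total device writes divided by user writes, and the earlier band-validity line "Band 1: 104448 / 261120 wr_cnt: 1 state: open" gives the open band's utilization. A minimal sketch (plain Python, illustrative only, not part of the test run) reproducing both figures from the counters logged just above and below:

  # Counters copied verbatim from the ftl0 statistics dump in this log.
  total_writes = 105408   # "total writes" reported by ftl_dev_dump_stats
  user_writes = 104448    # "user writes" (equals "total valid LBAs" here)

  waf = total_writes / user_writes
  print(f"WAF: {waf:.4f}")                 # -> WAF: 1.0092, matching the log

  # "Band 1: 104448 / 261120" from the bands-validity dump above:
  valid_blocks, band_size = 104448, 261120
  print(f"Band 1 utilization: {valid_blocks / band_size:.1%}")   # -> 40.0%

The near-unity WAF is what one would expect for this workload: almost all writes landed in a single band, with only the ~960 extra blocks of metadata/housekeeping writes on top of the user data.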
00:27:33.767 [2024-12-07 17:44:06.960192] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 105408
00:27:33.767 [2024-12-07 17:44:06.960200] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 104448
00:27:33.767 [2024-12-07 17:44:06.960209] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0092
00:27:33.767 [2024-12-07 17:44:06.960216] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:27:33.767 [2024-12-07 17:44:06.960225] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:27:33.767 [2024-12-07 17:44:06.960233] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:27:33.767 [2024-12-07 17:44:06.960240] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:27:33.768 [2024-12-07 17:44:06.960247] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:27:33.768 [2024-12-07 17:44:06.960255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:33.768 [2024-12-07 17:44:06.960263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:27:33.768 [2024-12-07 17:44:06.960272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.939 ms
00:27:33.768 [2024-12-07 17:44:06.960280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:33.768 [2024-12-07 17:44:06.974057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:33.768 [2024-12-07 17:44:06.974096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:27:33.768 [2024-12-07 17:44:06.974106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.744 ms
00:27:33.768 [2024-12-07 17:44:06.974114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:33.768 [2024-12-07 17:44:06.974499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:33.768 [2024-12-07 17:44:06.974509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:27:33.768 [2024-12-07 17:44:06.974524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms
00:27:33.768 [2024-12-07 17:44:06.974532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:33.768 [2024-12-07 17:44:07.011102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:33.768 [2024-12-07 17:44:07.011144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:27:33.768 [2024-12-07 17:44:07.011156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:33.768 [2024-12-07 17:44:07.011165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:33.768 [2024-12-07 17:44:07.011231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:33.768 [2024-12-07 17:44:07.011242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:27:33.768 [2024-12-07 17:44:07.011258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:33.768 [2024-12-07 17:44:07.011268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:33.768 [2024-12-07 17:44:07.011337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:33.768 [2024-12-07 17:44:07.011349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:27:33.768 [2024-12-07 17:44:07.011358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:33.768 [2024-12-07 17:44:07.011367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:33.768 [2024-12-07 17:44:07.011383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:33.768 [2024-12-07 17:44:07.011393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:27:33.768 [2024-12-07 17:44:07.011402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:33.768 [2024-12-07 17:44:07.011411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:33.768 [2024-12-07 17:44:07.096300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:33.768 [2024-12-07 17:44:07.096353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:27:33.768 [2024-12-07 17:44:07.096366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:33.768 [2024-12-07 17:44:07.096374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:34.028 [2024-12-07 17:44:07.165715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:34.028 [2024-12-07 17:44:07.165762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:27:34.028 [2024-12-07 17:44:07.165774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:34.028 [2024-12-07 17:44:07.165788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:34.028 [2024-12-07 17:44:07.165874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:34.028 [2024-12-07 17:44:07.165885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:27:34.028 [2024-12-07 17:44:07.165895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:34.028 [2024-12-07 17:44:07.165903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:34.028 [2024-12-07 17:44:07.165939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:34.028 [2024-12-07 17:44:07.165948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:27:34.028 [2024-12-07 17:44:07.165957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:34.028 [2024-12-07 17:44:07.165965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:34.028 [2024-12-07 17:44:07.166081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:34.028 [2024-12-07 17:44:07.166093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:27:34.028 [2024-12-07 17:44:07.166102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:34.028 [2024-12-07 17:44:07.166110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:34.028 [2024-12-07 17:44:07.166143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:34.028 [2024-12-07 17:44:07.166154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:27:34.028 [2024-12-07 17:44:07.166162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:34.028 [2024-12-07 17:44:07.166170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:34.029 [2024-12-07 17:44:07.166216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:34.029 [2024-12-07 17:44:07.166226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:27:34.029 [2024-12-07 17:44:07.166234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:34.029 [2024-12-07 17:44:07.166242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:34.029 [2024-12-07 17:44:07.166290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:34.029 [2024-12-07 17:44:07.166300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:27:34.029 [2024-12-07 17:44:07.166309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:34.029 [2024-12-07 17:44:07.166317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:34.029 [2024-12-07 17:44:07.166454] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 639.926 ms, result 0
00:27:35.447 
00:27:35.447 
00:27:35.447 17:44:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:27:37.362 17:44:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:27:37.362 [2024-12-07 17:44:10.741342] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization...
00:27:37.622 [2024-12-07 17:44:10.741483] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81607 ]
00:27:37.622 [2024-12-07 17:44:10.898226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:37.885 [2024-12-07 17:44:10.977394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:37.885 [2024-12-07 17:44:11.187330] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:27:38.145 [2024-12-07 17:44:11.187382] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:27:38.145 [2024-12-07 17:44:11.343715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:38.145 [2024-12-07 17:44:11.343766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:27:38.145 [2024-12-07 17:44:11.343780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:27:38.145 [2024-12-07 17:44:11.343788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:38.145 [2024-12-07 17:44:11.343837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:38.145 [2024-12-07 17:44:11.343849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:27:38.145 [2024-12-07 17:44:11.343858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms
00:27:38.145 [2024-12-07 17:44:11.343865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:38.145 [2024-12-07 17:44:11.343884] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:27:38.145 [2024-12-07 17:44:11.344626] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:27:38.145 [2024-12-07 17:44:11.344651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:38.145 [2024-12-07 17:44:11.344659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:27:38.145 [2024-12-07 17:44:11.344668] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 00:27:38.145 [2024-12-07 17:44:11.344676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.145 [2024-12-07 17:44:11.345903] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:38.146 [2024-12-07 17:44:11.359344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.146 [2024-12-07 17:44:11.359389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:38.146 [2024-12-07 17:44:11.359401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.442 ms 00:27:38.146 [2024-12-07 17:44:11.359414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.146 [2024-12-07 17:44:11.359485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.146 [2024-12-07 17:44:11.359495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:38.146 [2024-12-07 17:44:11.359503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:27:38.146 [2024-12-07 17:44:11.359511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.146 [2024-12-07 17:44:11.365508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.146 [2024-12-07 17:44:11.365542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:38.146 [2024-12-07 17:44:11.365554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.914 ms 00:27:38.146 [2024-12-07 17:44:11.365566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.146 [2024-12-07 17:44:11.365639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.146 [2024-12-07 17:44:11.365648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:38.146 [2024-12-07 17:44:11.365657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:27:38.146 [2024-12-07 17:44:11.365664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.146 [2024-12-07 17:44:11.365702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.146 [2024-12-07 17:44:11.365712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:38.146 [2024-12-07 17:44:11.365720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:38.146 [2024-12-07 17:44:11.365728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.146 [2024-12-07 17:44:11.365752] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:38.146 [2024-12-07 17:44:11.369193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.146 [2024-12-07 17:44:11.369226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:38.146 [2024-12-07 17:44:11.369238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.446 ms 00:27:38.146 [2024-12-07 17:44:11.369246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.146 [2024-12-07 17:44:11.369279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.146 [2024-12-07 17:44:11.369287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:38.146 [2024-12-07 17:44:11.369295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:38.146 [2024-12-07 17:44:11.369302] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.146 [2024-12-07 17:44:11.369322] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:38.146 [2024-12-07 17:44:11.369342] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:38.146 [2024-12-07 17:44:11.369377] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:38.146 [2024-12-07 17:44:11.369396] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:38.146 [2024-12-07 17:44:11.369523] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:38.146 [2024-12-07 17:44:11.369534] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:38.146 [2024-12-07 17:44:11.369544] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:38.146 [2024-12-07 17:44:11.369554] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:38.146 [2024-12-07 17:44:11.369563] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:38.146 [2024-12-07 17:44:11.369571] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:38.146 [2024-12-07 17:44:11.369578] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:38.146 [2024-12-07 17:44:11.369588] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:38.146 [2024-12-07 17:44:11.369596] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:38.146 [2024-12-07 17:44:11.369604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.146 [2024-12-07 17:44:11.369612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:38.146 [2024-12-07 17:44:11.369619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:27:38.146 [2024-12-07 17:44:11.369626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.146 [2024-12-07 17:44:11.369708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.146 [2024-12-07 17:44:11.369717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:38.146 [2024-12-07 17:44:11.369724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:38.146 [2024-12-07 17:44:11.369731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.146 [2024-12-07 17:44:11.369860] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:38.146 [2024-12-07 17:44:11.369881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:38.146 [2024-12-07 17:44:11.369890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:38.146 [2024-12-07 17:44:11.369899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.146 [2024-12-07 17:44:11.369907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:38.146 [2024-12-07 17:44:11.369914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:38.146 [2024-12-07 17:44:11.369921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 
MiB 00:27:38.146 [2024-12-07 17:44:11.369928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:38.146 [2024-12-07 17:44:11.369935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:38.146 [2024-12-07 17:44:11.369942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:38.146 [2024-12-07 17:44:11.369949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:38.146 [2024-12-07 17:44:11.369956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:38.146 [2024-12-07 17:44:11.369963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:38.146 [2024-12-07 17:44:11.369978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:38.146 [2024-12-07 17:44:11.369998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:38.146 [2024-12-07 17:44:11.370005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.146 [2024-12-07 17:44:11.370012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:38.146 [2024-12-07 17:44:11.370019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:38.146 [2024-12-07 17:44:11.370025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.146 [2024-12-07 17:44:11.370034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:38.146 [2024-12-07 17:44:11.370041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:38.146 [2024-12-07 17:44:11.370047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:38.146 [2024-12-07 17:44:11.370054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:38.146 [2024-12-07 17:44:11.370060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:38.146 [2024-12-07 17:44:11.370067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:38.146 [2024-12-07 17:44:11.370073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:38.146 [2024-12-07 17:44:11.370080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:38.146 [2024-12-07 17:44:11.370086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:38.146 [2024-12-07 17:44:11.370093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:38.146 [2024-12-07 17:44:11.370100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:38.146 [2024-12-07 17:44:11.370106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:38.146 [2024-12-07 17:44:11.370113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:38.146 [2024-12-07 17:44:11.370120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:38.146 [2024-12-07 17:44:11.370126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:38.146 [2024-12-07 17:44:11.370133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:38.146 [2024-12-07 17:44:11.370140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:38.146 [2024-12-07 17:44:11.370147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:38.146 [2024-12-07 17:44:11.370153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:38.146 [2024-12-07 17:44:11.370160] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:38.146 [2024-12-07 17:44:11.370166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.146 [2024-12-07 17:44:11.370173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:38.146 [2024-12-07 17:44:11.370179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:38.146 [2024-12-07 17:44:11.370186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.146 [2024-12-07 17:44:11.370192] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:38.146 [2024-12-07 17:44:11.370200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:38.146 [2024-12-07 17:44:11.370208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:38.146 [2024-12-07 17:44:11.370216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:38.146 [2024-12-07 17:44:11.370224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:38.146 [2024-12-07 17:44:11.370231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:38.146 [2024-12-07 17:44:11.370237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:38.146 [2024-12-07 17:44:11.370244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:38.146 [2024-12-07 17:44:11.370250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:38.146 [2024-12-07 17:44:11.370257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:38.146 [2024-12-07 17:44:11.370265] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:38.146 [2024-12-07 17:44:11.370274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:38.146 [2024-12-07 17:44:11.370284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:38.146 [2024-12-07 17:44:11.370291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:38.146 [2024-12-07 17:44:11.370299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:38.146 [2024-12-07 17:44:11.370306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:38.146 [2024-12-07 17:44:11.370313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:38.146 [2024-12-07 17:44:11.370320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:38.146 [2024-12-07 17:44:11.370327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:38.146 [2024-12-07 17:44:11.370334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:38.146 [2024-12-07 17:44:11.370340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:38.146 [2024-12-07 17:44:11.370347] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:38.146 [2024-12-07 17:44:11.370354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:38.146 [2024-12-07 17:44:11.370362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:38.146 [2024-12-07 17:44:11.370369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:38.146 [2024-12-07 17:44:11.370375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:38.146 [2024-12-07 17:44:11.370382] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:38.146 [2024-12-07 17:44:11.370390] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:38.146 [2024-12-07 17:44:11.370398] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:38.146 [2024-12-07 17:44:11.370405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:38.146 [2024-12-07 17:44:11.370413] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:38.146 [2024-12-07 17:44:11.370420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:38.146 [2024-12-07 17:44:11.370427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.146 [2024-12-07 17:44:11.370434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:38.146 [2024-12-07 17:44:11.370446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:27:38.146 [2024-12-07 17:44:11.370454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.146 [2024-12-07 17:44:11.399216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.146 [2024-12-07 17:44:11.399260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:38.146 [2024-12-07 17:44:11.399271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.716 ms 00:27:38.146 [2024-12-07 17:44:11.399283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.146 [2024-12-07 17:44:11.399371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.146 [2024-12-07 17:44:11.399380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:38.146 [2024-12-07 17:44:11.399389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:27:38.146 [2024-12-07 17:44:11.399397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.146 [2024-12-07 17:44:11.448895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.146 [2024-12-07 17:44:11.448951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:38.146 [2024-12-07 17:44:11.448964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 49.445 ms 00:27:38.146 [2024-12-07 17:44:11.448973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.146 [2024-12-07 17:44:11.449035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.146 [2024-12-07 17:44:11.449046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:38.146 [2024-12-07 17:44:11.449059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:38.146 [2024-12-07 17:44:11.449067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.146 [2024-12-07 17:44:11.449683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.146 [2024-12-07 17:44:11.449725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:38.146 [2024-12-07 17:44:11.449736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:27:38.146 [2024-12-07 17:44:11.449744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.146 [2024-12-07 17:44:11.449898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.146 [2024-12-07 17:44:11.449909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:38.146 [2024-12-07 17:44:11.449925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:27:38.146 [2024-12-07 17:44:11.449932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.146 [2024-12-07 17:44:11.465582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.146 [2024-12-07 17:44:11.465625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:38.146 [2024-12-07 17:44:11.465636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.630 ms 00:27:38.146 [2024-12-07 17:44:11.465644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.146 [2024-12-07 17:44:11.479876] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:38.146 [2024-12-07 17:44:11.479923] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:38.146 [2024-12-07 17:44:11.479937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.146 [2024-12-07 17:44:11.479946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:38.146 [2024-12-07 17:44:11.479955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.181 ms 00:27:38.146 [2024-12-07 17:44:11.479962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.146 [2024-12-07 17:44:11.510075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.146 [2024-12-07 17:44:11.510125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:38.146 [2024-12-07 17:44:11.510137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.049 ms 00:27:38.146 [2024-12-07 17:44:11.510146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.146 [2024-12-07 17:44:11.523313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.146 [2024-12-07 17:44:11.523355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:38.146 [2024-12-07 17:44:11.523366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.112 ms 00:27:38.146 [2024-12-07 17:44:11.523374] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.407 [2024-12-07 17:44:11.535844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.407 [2024-12-07 17:44:11.535887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:38.407 [2024-12-07 17:44:11.535899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.422 ms 00:27:38.407 [2024-12-07 17:44:11.535905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.407 [2024-12-07 17:44:11.536546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.407 [2024-12-07 17:44:11.536578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:38.407 [2024-12-07 17:44:11.536591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:27:38.407 [2024-12-07 17:44:11.536599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.407 [2024-12-07 17:44:11.603306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.407 [2024-12-07 17:44:11.603369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:38.407 [2024-12-07 17:44:11.603390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.686 ms 00:27:38.407 [2024-12-07 17:44:11.603400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.407 [2024-12-07 17:44:11.614588] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:38.407 [2024-12-07 17:44:11.617637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.407 [2024-12-07 17:44:11.617679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:38.407 [2024-12-07 17:44:11.617693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.182 ms 00:27:38.407 [2024-12-07 17:44:11.617703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.407 [2024-12-07 17:44:11.617789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.407 [2024-12-07 17:44:11.617801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:38.407 [2024-12-07 17:44:11.617814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:27:38.407 [2024-12-07 17:44:11.617822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.407 [2024-12-07 17:44:11.619531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.407 [2024-12-07 17:44:11.619578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:38.407 [2024-12-07 17:44:11.619589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.670 ms 00:27:38.407 [2024-12-07 17:44:11.619598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.407 [2024-12-07 17:44:11.619627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.407 [2024-12-07 17:44:11.619636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:38.407 [2024-12-07 17:44:11.619645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:38.407 [2024-12-07 17:44:11.619654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.407 [2024-12-07 17:44:11.619701] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:38.407 [2024-12-07 17:44:11.619712] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:38.407 [2024-12-07 17:44:11.619720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:27:38.407 [2024-12-07 17:44:11.619729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:27:38.407 [2024-12-07 17:44:11.619738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:38.407 [2024-12-07 17:44:11.645549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:38.407 [2024-12-07 17:44:11.645597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:27:38.407 [2024-12-07 17:44:11.645615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.792 ms
00:27:38.407 [2024-12-07 17:44:11.645624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:38.407 [2024-12-07 17:44:11.645710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:38.407 [2024-12-07 17:44:11.645720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:27:38.407 [2024-12-07 17:44:11.645730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:27:38.407 [2024-12-07 17:44:11.645739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:38.407 [2024-12-07 17:44:11.647395] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 303.151 ms, result 0
00:27:39.787 [2024-12-07T17:44:14.113Z] Copying: 996/1048576 [kB] (996 kBps)
[... intermediate Copying progress updates elided ...]
[2024-12-07T17:45:01.774Z] Copying: 1024/1024 [MB] (average 20 MBps)
[2024-12-07 17:45:01.759318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:28.392 [2024-12-07 17:45:01.759397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:28:28.392 [2024-12-07 17:45:01.759417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:28:28.392 [2024-12-07 17:45:01.759428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:28.392 [2024-12-07 17:45:01.759457] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:28:28.392 [2024-12-07 17:45:01.763132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:28.392 [2024-12-07 17:45:01.763173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:28:28.392 [2024-12-07 17:45:01.763185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.655 ms
00:28:28.392 [2024-12-07 17:45:01.763193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:28.392 [2024-12-07 17:45:01.763413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:28.393 [2024-12-07 17:45:01.763431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:28:28.393 [2024-12-07 17:45:01.763445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms
00:28:28.393 [2024-12-07 17:45:01.763452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:28.657 [2024-12-07 17:45:01.776748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:28.657 [2024-12-07 17:45:01.776799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:28:28.657 [2024-12-07 17:45:01.776813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.277 ms
00:28:28.657 [2024-12-07 17:45:01.776822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:28.657 [2024-12-07 17:45:01.783126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:28.657 [2024-12-07 17:45:01.783161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:28:28.657 [2024-12-07 17:45:01.783181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.267 ms
00:28:28.657 [2024-12-07 17:45:01.783189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
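A quick sanity check on the reported average above: the copy moved 1024 MiB between the 'FTL startup' finish (17:44:11.647) and the first deinit trace step (17:45:01.759), about 50.1 s. A minimal sketch (plain Python, not part of the test harness; both timestamps are read off the surrounding log):

```python
from datetime import datetime

# Timestamps taken from the log lines above.
start = datetime.fromisoformat("2024-12-07 17:44:11.647395")  # 'FTL startup' finished
end = datetime.fromisoformat("2024-12-07 17:45:01.759318")    # first deinit trace step

elapsed = (end - start).total_seconds()  # ~50.1 s
print(round(1024 / elapsed))             # ~20 -> matches "(average 20 MBps)"
```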
00:28:28.657 [2024-12-07 17:45:01.810370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:28.657 [2024-12-07 17:45:01.810415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:28:28.657 [2024-12-07 17:45:01.810427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.122 ms
00:28:28.657 [2024-12-07 17:45:01.810434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:28.657 [2024-12-07 17:45:01.827265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:28.657 [2024-12-07 17:45:01.827307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:28:28.657 [2024-12-07 17:45:01.827319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.782 ms
00:28:28.657 [2024-12-07 17:45:01.827327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:28.657 [2024-12-07 17:45:01.831105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:28.657 [2024-12-07 17:45:01.831146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:28:28.657 [2024-12-07 17:45:01.831157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.725 ms
00:28:28.657 [2024-12-07 17:45:01.831172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:28.657 [2024-12-07 17:45:01.857393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:28.657 [2024-12-07 17:45:01.857433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:28:28.657 [2024-12-07 17:45:01.857445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.205 ms
00:28:28.657 [2024-12-07 17:45:01.857451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:28.657 [2024-12-07 17:45:01.882654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:28.657 [2024-12-07 17:45:01.882693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:28:28.657 [2024-12-07 17:45:01.882703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.157 ms
00:28:28.657 [2024-12-07 17:45:01.882710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:28.657 [2024-12-07 17:45:01.907526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:28.657 [2024-12-07 17:45:01.907567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:28:28.657 [2024-12-07 17:45:01.907578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.771 ms
00:28:28.657 [2024-12-07 17:45:01.907584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:28.657 [2024-12-07 17:45:01.932742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:28.657 [2024-12-07 17:45:01.932780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:28:28.657 [2024-12-07 17:45:01.932790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.085 ms
00:28:28.657 [2024-12-07 17:45:01.932798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:28.657 [2024-12-07 17:45:01.932841] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:28:28.657 [2024-12-07 17:45:01.932856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:28:28.657 [2024-12-07 17:45:01.932867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
00:28:28.657 [2024-12-07 17:45:01.932876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 3-100: 0 / 261120 wr_cnt: 0 state: free (98 identical entries condensed)
00:28:28.658 [2024-12-07 17:45:01.933653] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:28:28.658 [2024-12-07 17:45:01.933662] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0c15aa6c-8f57-483a-a574-e9de90c611d1
00:28:28.658 [2024-12-07 17:45:01.933670] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid
LBAs: 262656 00:28:28.658 [2024-12-07 17:45:01.933677] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 160192 00:28:28.658 [2024-12-07 17:45:01.933690] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 158208 00:28:28.658 [2024-12-07 17:45:01.933698] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0125 00:28:28.658 [2024-12-07 17:45:01.933705] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:28.658 [2024-12-07 17:45:01.933720] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:28.658 [2024-12-07 17:45:01.933728] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:28.658 [2024-12-07 17:45:01.933734] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:28.658 [2024-12-07 17:45:01.933741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:28.658 [2024-12-07 17:45:01.933748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.658 [2024-12-07 17:45:01.933755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:28.658 [2024-12-07 17:45:01.933764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.908 ms 00:28:28.658 [2024-12-07 17:45:01.933771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.658 [2024-12-07 17:45:01.947701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.658 [2024-12-07 17:45:01.947739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:28.658 [2024-12-07 17:45:01.947750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.911 ms 00:28:28.658 [2024-12-07 17:45:01.947758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.658 [2024-12-07 17:45:01.948197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.658 [2024-12-07 17:45:01.948210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:28.658 [2024-12-07 17:45:01.948220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:28:28.658 [2024-12-07 17:45:01.948227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.658 [2024-12-07 17:45:01.984991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.658 [2024-12-07 17:45:01.985033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:28.658 [2024-12-07 17:45:01.985044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.658 [2024-12-07 17:45:01.985052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.658 [2024-12-07 17:45:01.985117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.658 [2024-12-07 17:45:01.985126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:28.658 [2024-12-07 17:45:01.985134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.658 [2024-12-07 17:45:01.985143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.658 [2024-12-07 17:45:01.985236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.658 [2024-12-07 17:45:01.985246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:28.658 [2024-12-07 17:45:01.985255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.658 [2024-12-07 
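The statistics block above is internally consistent and easy to re-derive; a small check (plain Python, all values copied from the dump):

```python
# Values from the ftl_dev_dump_stats output above.
total_writes = 160192
user_writes = 158208
print(round(total_writes / user_writes, 4))  # 1.0125 -> matches "WAF: 1.0125"

# Valid LBAs are just the sum over the band validity dump:
# Band 1 is full, Band 2 is partially written, bands 3-100 are empty.
band1, band2 = 261120, 1536
print(band1 + band2)                         # 262656 -> matches "total valid LBAs: 262656"
```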
17:45:01.985263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.658 [2024-12-07 17:45:01.985279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.658 [2024-12-07 17:45:01.985287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:28.658 [2024-12-07 17:45:01.985295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.658 [2024-12-07 17:45:01.985303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.920 [2024-12-07 17:45:02.071596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.920 [2024-12-07 17:45:02.071649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:28.920 [2024-12-07 17:45:02.071662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.920 [2024-12-07 17:45:02.071670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.920 [2024-12-07 17:45:02.142374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.920 [2024-12-07 17:45:02.142428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:28.921 [2024-12-07 17:45:02.142440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.921 [2024-12-07 17:45:02.142449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.921 [2024-12-07 17:45:02.142512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.921 [2024-12-07 17:45:02.142528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:28.921 [2024-12-07 17:45:02.142538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.921 [2024-12-07 17:45:02.142546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.921 [2024-12-07 17:45:02.142605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.921 [2024-12-07 17:45:02.142615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:28.921 [2024-12-07 17:45:02.142624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.921 [2024-12-07 17:45:02.142634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.921 [2024-12-07 17:45:02.142738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.921 [2024-12-07 17:45:02.142748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:28.921 [2024-12-07 17:45:02.142761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.921 [2024-12-07 17:45:02.142769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.921 [2024-12-07 17:45:02.142802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.921 [2024-12-07 17:45:02.142812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:28.921 [2024-12-07 17:45:02.142820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.921 [2024-12-07 17:45:02.142828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.921 [2024-12-07 17:45:02.142870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.921 [2024-12-07 17:45:02.142880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:28.921 [2024-12-07 17:45:02.142891] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.921 [2024-12-07 17:45:02.142899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.921 [2024-12-07 17:45:02.142948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.921 [2024-12-07 17:45:02.142969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:28.921 [2024-12-07 17:45:02.142979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.921 [2024-12-07 17:45:02.143014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.921 [2024-12-07 17:45:02.143157] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 383.810 ms, result 0 00:28:29.864 00:28:29.864 00:28:29.864 17:45:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:31.242 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:31.242 17:45:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:31.242 [2024-12-07 17:45:04.597519] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:28:31.242 [2024-12-07 17:45:04.597644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82155 ] 00:28:31.503 [2024-12-07 17:45:04.756574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.504 [2024-12-07 17:45:04.879115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.076 [2024-12-07 17:45:05.178223] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:32.076 [2024-12-07 17:45:05.178314] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:32.076 [2024-12-07 17:45:05.341833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.076 [2024-12-07 17:45:05.341900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:32.076 [2024-12-07 17:45:05.341915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:32.076 [2024-12-07 17:45:05.341924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.076 [2024-12-07 17:45:05.341999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.076 [2024-12-07 17:45:05.342013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:32.076 [2024-12-07 17:45:05.342023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:32.076 [2024-12-07 17:45:05.342031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.076 [2024-12-07 17:45:05.342053] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:32.076 [2024-12-07 17:45:05.342969] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:32.076 [2024-12-07 17:45:05.343038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.076 [2024-12-07 17:45:05.343048] mngt/ftl_mngt.c: 
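For context on the two shell steps shown above: dirty_shutdown.sh verifies the first region with a plain md5sum -c against a pre-recorded digest, then spdk_dd dumps the next region of ftl0. The sizes line up with the copy totals in the log: 262144 blocks at the 4 KiB block size implied by the "1024/1024 [MB]" total is exactly 1 GiB, and --skip=262144 starts the read at the second GiB. A rough, hedged equivalent of the verification (plain Python sketch under those assumptions, not the harness's actual code):

```python
import hashlib

BLOCK = 4096    # FTL block size implied by 262144 blocks == 1024 MiB (assumption)
COUNT = 262144  # --count / --skip from the spdk_dd invocation above

def md5_of(path: str, nbytes: int = COUNT * BLOCK) -> str:
    """Hash the first nbytes of a dumped file, mirroring what 'md5sum -c' checks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        remaining = nbytes
        while remaining:
            chunk = f.read(min(1 << 20, remaining))
            if not chunk:
                break
            h.update(chunk)
            remaining -= len(chunk)
    return h.hexdigest()

# Size arithmetic behind the "1024/1024 [MB]" progress total:
print(262144 * 4096 // (1024 * 1024))  # 1024
```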
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:32.076 [2024-12-07 17:45:05.343058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.991 ms 00:28:32.076 [2024-12-07 17:45:05.343066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.076 [2024-12-07 17:45:05.344943] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:32.076 [2024-12-07 17:45:05.359576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.076 [2024-12-07 17:45:05.359643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:32.076 [2024-12-07 17:45:05.359657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.635 ms 00:28:32.076 [2024-12-07 17:45:05.359666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.076 [2024-12-07 17:45:05.359752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.076 [2024-12-07 17:45:05.359762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:32.076 [2024-12-07 17:45:05.359771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:28:32.076 [2024-12-07 17:45:05.359779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.076 [2024-12-07 17:45:05.368214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.076 [2024-12-07 17:45:05.368263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:32.076 [2024-12-07 17:45:05.368273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.354 ms 00:28:32.076 [2024-12-07 17:45:05.368288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.076 [2024-12-07 17:45:05.368372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.076 [2024-12-07 17:45:05.368382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:32.076 [2024-12-07 17:45:05.368391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:28:32.076 [2024-12-07 17:45:05.368399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.076 [2024-12-07 17:45:05.368444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.076 [2024-12-07 17:45:05.368455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:32.076 [2024-12-07 17:45:05.368463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:32.076 [2024-12-07 17:45:05.368471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.076 [2024-12-07 17:45:05.368499] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:32.076 [2024-12-07 17:45:05.372515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.076 [2024-12-07 17:45:05.372558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:32.076 [2024-12-07 17:45:05.372572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.021 ms 00:28:32.076 [2024-12-07 17:45:05.372580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.076 [2024-12-07 17:45:05.372620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.076 [2024-12-07 17:45:05.372629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:32.076 [2024-12-07 17:45:05.372638] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:32.076 [2024-12-07 17:45:05.372645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.076 [2024-12-07 17:45:05.372698] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:32.076 [2024-12-07 17:45:05.372723] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:32.076 [2024-12-07 17:45:05.372761] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:32.076 [2024-12-07 17:45:05.372780] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:32.076 [2024-12-07 17:45:05.372891] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:32.076 [2024-12-07 17:45:05.372903] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:32.076 [2024-12-07 17:45:05.372914] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:32.076 [2024-12-07 17:45:05.372924] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:32.076 [2024-12-07 17:45:05.372934] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:32.076 [2024-12-07 17:45:05.372943] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:32.076 [2024-12-07 17:45:05.372952] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:32.076 [2024-12-07 17:45:05.372962] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:32.076 [2024-12-07 17:45:05.372970] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:32.076 [2024-12-07 17:45:05.372995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.076 [2024-12-07 17:45:05.373003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:32.076 [2024-12-07 17:45:05.373012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:28:32.076 [2024-12-07 17:45:05.373019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.076 [2024-12-07 17:45:05.373102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.076 [2024-12-07 17:45:05.373112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:32.076 [2024-12-07 17:45:05.373121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:32.076 [2024-12-07 17:45:05.373129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.076 [2024-12-07 17:45:05.373241] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:32.076 [2024-12-07 17:45:05.373299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:32.076 [2024-12-07 17:45:05.373309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:32.076 [2024-12-07 17:45:05.373318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.076 [2024-12-07 17:45:05.373327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:32.076 [2024-12-07 17:45:05.373334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:32.076 
[2024-12-07 17:45:05.373341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:32.076 [2024-12-07 17:45:05.373348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:32.076 [2024-12-07 17:45:05.373356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:32.076 [2024-12-07 17:45:05.373364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:32.076 [2024-12-07 17:45:05.373371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:32.076 [2024-12-07 17:45:05.373377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:32.076 [2024-12-07 17:45:05.373400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:32.076 [2024-12-07 17:45:05.373416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:32.076 [2024-12-07 17:45:05.373423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:32.076 [2024-12-07 17:45:05.373430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.076 [2024-12-07 17:45:05.373437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:32.076 [2024-12-07 17:45:05.373445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:32.076 [2024-12-07 17:45:05.373453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.076 [2024-12-07 17:45:05.373461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:32.076 [2024-12-07 17:45:05.373468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:32.076 [2024-12-07 17:45:05.373475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:32.076 [2024-12-07 17:45:05.373482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:32.076 [2024-12-07 17:45:05.373489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:32.076 [2024-12-07 17:45:05.373497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:32.076 [2024-12-07 17:45:05.373504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:32.076 [2024-12-07 17:45:05.373511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:32.076 [2024-12-07 17:45:05.373518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:32.076 [2024-12-07 17:45:05.373525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:32.076 [2024-12-07 17:45:05.373533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:32.076 [2024-12-07 17:45:05.373539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:32.076 [2024-12-07 17:45:05.373547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:32.076 [2024-12-07 17:45:05.373554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:32.076 [2024-12-07 17:45:05.373561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:32.076 [2024-12-07 17:45:05.373568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:32.076 [2024-12-07 17:45:05.373574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:32.076 [2024-12-07 17:45:05.373581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:32.076 [2024-12-07 17:45:05.373588] ftl_layout.c: 130:dump_region: *NOTICE*: 
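The NV cache layout dump above is self-consistent: regions are packed back to back, and the l2p region size follows directly from the L2P parameters printed earlier (20971520 entries, 4-byte addresses). A quick check (plain Python, numbers from the dump):

```python
# L2P region: 20971520 entries * 4-byte addresses = 80 MiB,
# matching "Region l2p ... blocks: 80.00 MiB".
print(20971520 * 4 / (1024 * 1024))  # 80.0

# Regions pack back to back: sb ends at 0.12 MiB where l2p starts,
# and l2p ends at 0.12 + 80.00 = 80.12 MiB where band_md starts.
print(round(0.12 + 80.00, 2))        # 80.12
```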
[FTL][ftl0] Region trim_log 00:28:32.076 [2024-12-07 17:45:05.373595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:32.076 [2024-12-07 17:45:05.373602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.076 [2024-12-07 17:45:05.373609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:32.076 [2024-12-07 17:45:05.373615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:32.076 [2024-12-07 17:45:05.373622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.076 [2024-12-07 17:45:05.373629] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:32.076 [2024-12-07 17:45:05.373637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:32.076 [2024-12-07 17:45:05.373645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:32.076 [2024-12-07 17:45:05.373653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:32.076 [2024-12-07 17:45:05.373660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:32.077 [2024-12-07 17:45:05.373667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:32.077 [2024-12-07 17:45:05.373678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:32.077 [2024-12-07 17:45:05.373686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:32.077 [2024-12-07 17:45:05.373693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:32.077 [2024-12-07 17:45:05.373699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:32.077 [2024-12-07 17:45:05.373708] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:32.077 [2024-12-07 17:45:05.373718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:32.077 [2024-12-07 17:45:05.373731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:32.077 [2024-12-07 17:45:05.373739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:32.077 [2024-12-07 17:45:05.373746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:32.077 [2024-12-07 17:45:05.373753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:32.077 [2024-12-07 17:45:05.373761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:32.077 [2024-12-07 17:45:05.373768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:32.077 [2024-12-07 17:45:05.373775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:32.077 [2024-12-07 17:45:05.373782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:32.077 [2024-12-07 17:45:05.373789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:32.077 [2024-12-07 17:45:05.373796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:32.077 [2024-12-07 17:45:05.373804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:32.077 [2024-12-07 17:45:05.373811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:32.077 [2024-12-07 17:45:05.373819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:32.077 [2024-12-07 17:45:05.373826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:32.077 [2024-12-07 17:45:05.373833] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:32.077 [2024-12-07 17:45:05.373841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:32.077 [2024-12-07 17:45:05.373849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:32.077 [2024-12-07 17:45:05.373857] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:32.077 [2024-12-07 17:45:05.373864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:32.077 [2024-12-07 17:45:05.373870] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:32.077 [2024-12-07 17:45:05.373879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.077 [2024-12-07 17:45:05.373886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:32.077 [2024-12-07 17:45:05.373894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:28:32.077 [2024-12-07 17:45:05.373909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.077 [2024-12-07 17:45:05.406621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.077 [2024-12-07 17:45:05.406678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:32.077 [2024-12-07 17:45:05.406692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.663 ms 00:28:32.077 [2024-12-07 17:45:05.406706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.077 [2024-12-07 17:45:05.406802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.077 [2024-12-07 17:45:05.406812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:32.077 [2024-12-07 17:45:05.406821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:32.077 [2024-12-07 17:45:05.406830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.353 [2024-12-07 17:45:05.455988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.353 [2024-12-07 17:45:05.456047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
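The superblock metadata layout table uses the same back-to-back packing, in units of blocks with hex offsets: each region's blk_offs is the previous region's blk_offs plus its blk_sz. A short check against the first few rows of the "SB metadata layout - nvc" table above (plain Python):

```python
# Region type:0x0 occupies blocks [0x0, 0x20), so type:0x2 starts at 0x20;
# type:0x2 spans 0x5000 blocks, so type:0x3 starts at 0x5020, and so on.
assert 0x0 + 0x20 == 0x20
assert 0x20 + 0x5000 == 0x5020
assert 0x5020 + 0x80 == 0x50a0
print("blk_offs values chain as expected")
```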
cache 00:28:32.353 [2024-12-07 17:45:05.456062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.086 ms 00:28:32.353 [2024-12-07 17:45:05.456071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.353 [2024-12-07 17:45:05.456123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.353 [2024-12-07 17:45:05.456134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:32.353 [2024-12-07 17:45:05.456148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:32.353 [2024-12-07 17:45:05.456157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.353 [2024-12-07 17:45:05.456775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.353 [2024-12-07 17:45:05.456822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:32.353 [2024-12-07 17:45:05.456834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:28:32.353 [2024-12-07 17:45:05.456842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.353 [2024-12-07 17:45:05.457025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.353 [2024-12-07 17:45:05.457036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:32.353 [2024-12-07 17:45:05.457052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:28:32.353 [2024-12-07 17:45:05.457060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.353 [2024-12-07 17:45:05.473724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.353 [2024-12-07 17:45:05.473772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:32.353 [2024-12-07 17:45:05.473785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.642 ms 00:28:32.353 [2024-12-07 17:45:05.473793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.353 [2024-12-07 17:45:05.488685] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:32.353 [2024-12-07 17:45:05.488741] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:32.353 [2024-12-07 17:45:05.488755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.353 [2024-12-07 17:45:05.488764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:32.353 [2024-12-07 17:45:05.488773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.844 ms 00:28:32.353 [2024-12-07 17:45:05.488781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.353 [2024-12-07 17:45:05.515256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.353 [2024-12-07 17:45:05.515310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:32.353 [2024-12-07 17:45:05.515323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.419 ms 00:28:32.353 [2024-12-07 17:45:05.515331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.353 [2024-12-07 17:45:05.528647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.353 [2024-12-07 17:45:05.528697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:32.353 [2024-12-07 17:45:05.528710] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.247 ms 00:28:32.353 [2024-12-07 17:45:05.528718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.353 [2024-12-07 17:45:05.541419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.353 [2024-12-07 17:45:05.541472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:32.353 [2024-12-07 17:45:05.541484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.651 ms 00:28:32.353 [2024-12-07 17:45:05.541493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.353 [2024-12-07 17:45:05.542179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.353 [2024-12-07 17:45:05.542209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:32.353 [2024-12-07 17:45:05.542224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:28:32.354 [2024-12-07 17:45:05.542232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.354 [2024-12-07 17:45:05.607880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.354 [2024-12-07 17:45:05.607942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:32.354 [2024-12-07 17:45:05.607964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.626 ms 00:28:32.354 [2024-12-07 17:45:05.607973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.354 [2024-12-07 17:45:05.619184] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:32.354 [2024-12-07 17:45:05.622178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.354 [2024-12-07 17:45:05.622223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:32.354 [2024-12-07 17:45:05.622234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.126 ms 00:28:32.354 [2024-12-07 17:45:05.622251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.354 [2024-12-07 17:45:05.622337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.354 [2024-12-07 17:45:05.622348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:32.354 [2024-12-07 17:45:05.622361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:32.354 [2024-12-07 17:45:05.622370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.354 [2024-12-07 17:45:05.623219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.354 [2024-12-07 17:45:05.623268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:32.354 [2024-12-07 17:45:05.623280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.811 ms 00:28:32.354 [2024-12-07 17:45:05.623290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.354 [2024-12-07 17:45:05.623321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.354 [2024-12-07 17:45:05.623331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:32.354 [2024-12-07 17:45:05.623341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:32.354 [2024-12-07 17:45:05.623350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.354 [2024-12-07 17:45:05.623396] mngt/ftl_mngt_self_test.c: 
208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:28:32.354 [2024-12-07 17:45:05.623408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:32.354 [2024-12-07 17:45:05.623417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:28:32.354 [2024-12-07 17:45:05.623427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:28:32.354 [2024-12-07 17:45:05.623436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:32.354 [2024-12-07 17:45:05.649493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:32.354 [2024-12-07 17:45:05.649549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:28:32.354 [2024-12-07 17:45:05.649567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.035 ms
00:28:32.354 [2024-12-07 17:45:05.649576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:32.354 [2024-12-07 17:45:05.649662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:32.354 [2024-12-07 17:45:05.649674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:28:32.354 [2024-12-07 17:45:05.649683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:28:32.354 [2024-12-07 17:45:05.649691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:32.354 [2024-12-07 17:45:05.651405] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 309.045 ms, result 0
00:28:33.777 [2024-12-07T17:45:08.097Z] Copying: 15/1024 [MB] (15 MBps)
[... intermediate Copying progress updates elided ...]
[2024-12-07T17:46:12.653Z] Copying: 1024/1024 [MB] (average 15 MBps)
[2024-12-07 17:46:12.640290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:39.271 [2024-12-07 17:46:12.640374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:29:39.271 [2024-12-07 17:46:12.640399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:29:39.271 [2024-12-07 17:46:12.640415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:39.271 [2024-12-07 17:46:12.640454] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:29:39.271 [2024-12-07 17:46:12.646254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:39.271 [2024-12-07 17:46:12.646314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:29:39.271 [2024-12-07 17:46:12.646329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.774 ms
00:29:39.271 [2024-12-07 17:46:12.646341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:39.271 [2024-12-07 17:46:12.646693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:39.271 [2024-12-07 17:46:12.646718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:29:39.271 [2024-12-07 17:46:12.646731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms
00:29:39.271 [2024-12-07 17:46:12.646744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*:
[FTL][ftl0] status: 0 00:29:39.532 [2024-12-07 17:46:12.652336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.532 [2024-12-07 17:46:12.652364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:39.532 [2024-12-07 17:46:12.652377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.572 ms 00:29:39.532 [2024-12-07 17:46:12.652393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.532 [2024-12-07 17:46:12.657871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.532 [2024-12-07 17:46:12.657896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:39.532 [2024-12-07 17:46:12.657905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.456 ms 00:29:39.532 [2024-12-07 17:46:12.657911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.532 [2024-12-07 17:46:12.677851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.532 [2024-12-07 17:46:12.677880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:39.532 [2024-12-07 17:46:12.677889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.899 ms 00:29:39.532 [2024-12-07 17:46:12.677895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.532 [2024-12-07 17:46:12.690302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.532 [2024-12-07 17:46:12.690330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:39.532 [2024-12-07 17:46:12.690339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.378 ms 00:29:39.532 [2024-12-07 17:46:12.690346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.532 [2024-12-07 17:46:12.693937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.532 [2024-12-07 17:46:12.693963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:39.532 [2024-12-07 17:46:12.693971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.557 ms 00:29:39.532 [2024-12-07 17:46:12.693977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.532 [2024-12-07 17:46:12.712560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.532 [2024-12-07 17:46:12.712584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:39.532 [2024-12-07 17:46:12.712592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.564 ms 00:29:39.532 [2024-12-07 17:46:12.712598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.532 [2024-12-07 17:46:12.730960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.532 [2024-12-07 17:46:12.730990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:39.532 [2024-12-07 17:46:12.730998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.337 ms 00:29:39.532 [2024-12-07 17:46:12.731004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.532 [2024-12-07 17:46:12.748518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.532 [2024-12-07 17:46:12.748543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:39.532 [2024-12-07 17:46:12.748550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.488 ms 00:29:39.532 
[2024-12-07 17:46:12.748556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.532 [2024-12-07 17:46:12.765995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.532 [2024-12-07 17:46:12.766019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:39.532 [2024-12-07 17:46:12.766026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.383 ms 00:29:39.532 [2024-12-07 17:46:12.766032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.532 [2024-12-07 17:46:12.766057] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:39.532 [2024-12-07 17:46:12.766072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:39.532 [2024-12-07 17:46:12.766083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:39.532 [2024-12-07 17:46:12.766090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:39.532 [2024-12-07 17:46:12.766096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 
17:46:12.766194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 
00:29:39.533 [2024-12-07 17:46:12.766343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 
wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:39.533 [2024-12-07 17:46:12.766603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:39.534 [2024-12-07 17:46:12.766609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:39.534 [2024-12-07 17:46:12.766615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:39.534 [2024-12-07 17:46:12.766621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:39.534 [2024-12-07 17:46:12.766627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:39.534 [2024-12-07 17:46:12.766633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:39.534 [2024-12-07 17:46:12.766639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:39.534 [2024-12-07 17:46:12.766645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:39.534 [2024-12-07 17:46:12.766650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:39.534 [2024-12-07 17:46:12.766656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:39.534 [2024-12-07 17:46:12.766661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:39.534 [2024-12-07 17:46:12.766672] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:39.534 [2024-12-07 17:46:12.766678] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0c15aa6c-8f57-483a-a574-e9de90c611d1 00:29:39.534 [2024-12-07 17:46:12.766686] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:39.534 [2024-12-07 17:46:12.766692] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:39.534 [2024-12-07 17:46:12.766697] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:39.534 [2024-12-07 17:46:12.766703] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:39.534 [2024-12-07 17:46:12.766715] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:39.534 [2024-12-07 17:46:12.766722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:39.534 [2024-12-07 17:46:12.766727] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:39.534 [2024-12-07 17:46:12.766732] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:39.534 [2024-12-07 17:46:12.766736] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:39.534 [2024-12-07 17:46:12.766742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.534 [2024-12-07 17:46:12.766748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:39.534 [2024-12-07 17:46:12.766755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:29:39.534 [2024-12-07 17:46:12.766763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.534 [2024-12-07 17:46:12.776761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.534 [2024-12-07 17:46:12.776785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:39.534 [2024-12-07 17:46:12.776793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.986 ms 00:29:39.534 [2024-12-07 17:46:12.776799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.534 [2024-12-07 17:46:12.777101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.534 [2024-12-07 17:46:12.777118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:39.534 [2024-12-07 17:46:12.777125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:29:39.534 [2024-12-07 17:46:12.777131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.534 [2024-12-07 17:46:12.804571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.534 [2024-12-07 17:46:12.804599] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:39.534 [2024-12-07 17:46:12.804607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.534 [2024-12-07 17:46:12.804613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.534 [2024-12-07 17:46:12.804660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.534 [2024-12-07 17:46:12.804669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:39.534 [2024-12-07 17:46:12.804675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.534 [2024-12-07 17:46:12.804681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.534 [2024-12-07 17:46:12.804726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.534 [2024-12-07 17:46:12.804734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:39.534 [2024-12-07 17:46:12.804741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.534 [2024-12-07 17:46:12.804747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.534 [2024-12-07 17:46:12.804759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.534 [2024-12-07 17:46:12.804766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:39.534 [2024-12-07 17:46:12.804775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.534 [2024-12-07 17:46:12.804782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.534 [2024-12-07 17:46:12.867284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.534 [2024-12-07 17:46:12.867319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:39.534 [2024-12-07 17:46:12.867327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.534 [2024-12-07 17:46:12.867334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.795 [2024-12-07 17:46:12.918290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.795 [2024-12-07 17:46:12.918328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:39.795 [2024-12-07 17:46:12.918336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.795 [2024-12-07 17:46:12.918342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.795 [2024-12-07 17:46:12.918417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.795 [2024-12-07 17:46:12.918425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:39.795 [2024-12-07 17:46:12.918433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.795 [2024-12-07 17:46:12.918439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.795 [2024-12-07 17:46:12.918469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.795 [2024-12-07 17:46:12.918478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:39.795 [2024-12-07 17:46:12.918484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.795 [2024-12-07 17:46:12.918492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.795 [2024-12-07 17:46:12.918567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:29:39.795 [2024-12-07 17:46:12.918580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:39.795 [2024-12-07 17:46:12.918587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.795 [2024-12-07 17:46:12.918594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.795 [2024-12-07 17:46:12.918619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.795 [2024-12-07 17:46:12.918633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:39.795 [2024-12-07 17:46:12.918640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.795 [2024-12-07 17:46:12.918646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.795 [2024-12-07 17:46:12.918682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.795 [2024-12-07 17:46:12.918694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:39.795 [2024-12-07 17:46:12.918701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.795 [2024-12-07 17:46:12.918707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.795 [2024-12-07 17:46:12.918746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:39.795 [2024-12-07 17:46:12.918754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:39.795 [2024-12-07 17:46:12.918761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:39.795 [2024-12-07 17:46:12.918769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.795 [2024-12-07 17:46:12.918879] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 278.581 ms, result 0 00:29:40.368 00:29:40.368 00:29:40.368 17:46:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:42.282 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:42.282 17:46:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:42.282 17:46:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:42.282 17:46:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:42.282 17:46:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:42.282 17:46:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:42.283 17:46:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:42.283 17:46:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:42.283 17:46:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 80128 00:29:42.283 17:46:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80128 ']' 00:29:42.283 17:46:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 80128 00:29:42.283 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80128) - No such process 00:29:42.283 Process with pid 80128 is not found 00:29:42.283 17:46:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process 
with pid 80128 is not found' 00:29:42.283 17:46:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:42.543 17:46:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:42.543 Remove shared memory files 00:29:42.543 17:46:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:42.543 17:46:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:42.543 17:46:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:42.543 17:46:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:42.543 17:46:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:42.543 17:46:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:42.543 00:29:42.544 real 4m22.945s 00:29:42.544 user 4m45.942s 00:29:42.544 sys 0m26.043s 00:29:42.544 17:46:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:42.544 ************************************ 00:29:42.544 END TEST ftl_dirty_shutdown 00:29:42.544 ************************************ 00:29:42.544 17:46:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:42.544 17:46:15 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:42.544 17:46:15 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:42.544 17:46:15 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:42.544 17:46:15 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:42.544 ************************************ 00:29:42.544 START TEST ftl_upgrade_shutdown 00:29:42.544 ************************************ 00:29:42.544 17:46:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:42.804 * Looking for test storage... 
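The "START TEST" / "END TEST" banners and the real/user/sys timings in the lines above come from SPDK's run_test wrapper, invoked here at ftl/ftl.sh@78. A minimal sketch of that idiom, simplified from test/common/autotest_common.sh (the real helper also manages xtrace state and exit-status bookkeeping, which this sketch omits):

  # Simplified run_test sketch: banner, timed execution of the test body, banner.
  run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"    # source of the real/user/sys lines in the log
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
  }

  # Invocation as traced above:
  run_test ftl_upgrade_shutdown "$rootdir/test/ftl/upgrade_shutdown.sh" 0000:00:11.0 0000:00:10.0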
00:29:42.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:42.804 17:46:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:42.804 17:46:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:29:42.804 17:46:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:42.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.804 --rc genhtml_branch_coverage=1 00:29:42.804 --rc genhtml_function_coverage=1 00:29:42.804 --rc genhtml_legend=1 00:29:42.804 --rc geninfo_all_blocks=1 00:29:42.804 --rc geninfo_unexecuted_blocks=1 00:29:42.804 00:29:42.804 ' 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:42.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.804 --rc genhtml_branch_coverage=1 00:29:42.804 --rc genhtml_function_coverage=1 00:29:42.804 --rc genhtml_legend=1 00:29:42.804 --rc geninfo_all_blocks=1 00:29:42.804 --rc geninfo_unexecuted_blocks=1 00:29:42.804 00:29:42.804 ' 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:42.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.804 --rc genhtml_branch_coverage=1 00:29:42.804 --rc genhtml_function_coverage=1 00:29:42.804 --rc genhtml_legend=1 00:29:42.804 --rc geninfo_all_blocks=1 00:29:42.804 --rc geninfo_unexecuted_blocks=1 00:29:42.804 00:29:42.804 ' 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:42.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.804 --rc genhtml_branch_coverage=1 00:29:42.804 --rc genhtml_function_coverage=1 00:29:42.804 --rc genhtml_legend=1 00:29:42.804 --rc geninfo_all_blocks=1 00:29:42.804 --rc geninfo_unexecuted_blocks=1 00:29:42.804 00:29:42.804 ' 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:42.804 17:46:16 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82949 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82949 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:42.804 17:46:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82949 ']' 00:29:42.805 17:46:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.805 17:46:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.805 17:46:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:42.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.805 17:46:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.805 17:46:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:42.805 [2024-12-07 17:46:16.151449] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
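At this point tcp_target_setup has launched a fresh spdk_tgt pinned to core 0 (pid 82949) and waitforlisten is blocking until the RPC socket at /var/tmp/spdk.sock answers. A hedged sketch of that bring-up; the polling loop is a simplification of waitforlisten, and the rpc_get_methods probe is an assumption rather than the exact check the helper performs:

  # Launch the target on core 0, then poll the RPC socket until it accepts requests.
  "$rootdir/build/bin/spdk_tgt" --cpumask='[0]' &
  spdk_tgt_pid=$!

  until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$spdk_tgt_pid" || exit 1  # give up if the target died during startup
    sleep 0.1
  done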
00:29:42.805 [2024-12-07 17:46:16.151747] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82949 ] 00:29:43.065 [2024-12-07 17:46:16.307403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.065 [2024-12-07 17:46:16.401712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.634 17:46:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.634 17:46:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:43.634 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:43.634 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:43.634 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:43.634 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:43.634 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:43.634 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:43.634 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:43.634 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:43.634 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:43.634 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:43.634 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:43.634 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:43.634 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:43.634 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:43.634 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:43.634 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:43.634 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:43.634 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:43.634 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:43.634 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:43.634 17:46:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:43.894 17:46:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:43.894 17:46:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:43.894 17:46:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:43.894 17:46:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:43.894 17:46:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:43.894 17:46:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:43.894 17:46:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:29:43.894 17:46:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:44.162 17:46:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:44.162 { 00:29:44.162 "name": "basen1", 00:29:44.162 "aliases": [ 00:29:44.162 "9b2692be-59b5-467d-9d82-cc456bd4d4f1" 00:29:44.162 ], 00:29:44.162 "product_name": "NVMe disk", 00:29:44.162 "block_size": 4096, 00:29:44.162 "num_blocks": 1310720, 00:29:44.162 "uuid": "9b2692be-59b5-467d-9d82-cc456bd4d4f1", 00:29:44.162 "numa_id": -1, 00:29:44.162 "assigned_rate_limits": { 00:29:44.162 "rw_ios_per_sec": 0, 00:29:44.162 "rw_mbytes_per_sec": 0, 00:29:44.162 "r_mbytes_per_sec": 0, 00:29:44.162 "w_mbytes_per_sec": 0 00:29:44.162 }, 00:29:44.162 "claimed": true, 00:29:44.162 "claim_type": "read_many_write_one", 00:29:44.162 "zoned": false, 00:29:44.162 "supported_io_types": { 00:29:44.162 "read": true, 00:29:44.162 "write": true, 00:29:44.162 "unmap": true, 00:29:44.162 "flush": true, 00:29:44.162 "reset": true, 00:29:44.162 "nvme_admin": true, 00:29:44.162 "nvme_io": true, 00:29:44.162 "nvme_io_md": false, 00:29:44.162 "write_zeroes": true, 00:29:44.162 "zcopy": false, 00:29:44.162 "get_zone_info": false, 00:29:44.162 "zone_management": false, 00:29:44.162 "zone_append": false, 00:29:44.162 "compare": true, 00:29:44.162 "compare_and_write": false, 00:29:44.162 "abort": true, 00:29:44.162 "seek_hole": false, 00:29:44.162 "seek_data": false, 00:29:44.162 "copy": true, 00:29:44.162 "nvme_iov_md": false 00:29:44.162 }, 00:29:44.162 "driver_specific": { 00:29:44.162 "nvme": [ 00:29:44.162 { 00:29:44.162 "pci_address": "0000:00:11.0", 00:29:44.162 "trid": { 00:29:44.162 "trtype": "PCIe", 00:29:44.162 "traddr": "0000:00:11.0" 00:29:44.162 }, 00:29:44.162 "ctrlr_data": { 00:29:44.162 "cntlid": 0, 00:29:44.162 "vendor_id": "0x1b36", 00:29:44.162 "model_number": "QEMU NVMe Ctrl", 00:29:44.162 "serial_number": "12341", 00:29:44.162 "firmware_revision": "8.0.0", 00:29:44.162 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:44.162 "oacs": { 00:29:44.162 "security": 0, 00:29:44.162 "format": 1, 00:29:44.162 "firmware": 0, 00:29:44.162 "ns_manage": 1 00:29:44.162 }, 00:29:44.162 "multi_ctrlr": false, 00:29:44.162 "ana_reporting": false 00:29:44.162 }, 00:29:44.162 "vs": { 00:29:44.162 "nvme_version": "1.4" 00:29:44.162 }, 00:29:44.162 "ns_data": { 00:29:44.162 "id": 1, 00:29:44.162 "can_share": false 00:29:44.162 } 00:29:44.162 } 00:29:44.162 ], 00:29:44.162 "mp_policy": "active_passive" 00:29:44.162 } 00:29:44.162 } 00:29:44.162 ]' 00:29:44.162 17:46:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:44.162 17:46:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:44.162 17:46:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:44.162 17:46:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:44.162 17:46:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:44.162 17:46:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:44.162 17:46:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:44.162 17:46:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:44.162 17:46:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:44.162 17:46:17 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:44.162 17:46:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:44.423 17:46:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=20f20238-93d7-4e14-83de-257d730dff97 00:29:44.423 17:46:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:44.423 17:46:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 20f20238-93d7-4e14-83de-257d730dff97 00:29:44.684 17:46:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:44.943 17:46:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=0b8fc061-2755-47d0-bc50-ca01517cbbb7 00:29:44.943 17:46:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 0b8fc061-2755-47d0-bc50-ca01517cbbb7 00:29:45.202 17:46:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=06b89e29-b4d8-4fd3-a5f1-a1a52382eaec 00:29:45.202 17:46:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 06b89e29-b4d8-4fd3-a5f1-a1a52382eaec ]] 00:29:45.202 17:46:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 06b89e29-b4d8-4fd3-a5f1-a1a52382eaec 5120 00:29:45.202 17:46:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:45.202 17:46:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:45.202 17:46:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=06b89e29-b4d8-4fd3-a5f1-a1a52382eaec 00:29:45.202 17:46:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:45.202 17:46:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 06b89e29-b4d8-4fd3-a5f1-a1a52382eaec 00:29:45.202 17:46:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=06b89e29-b4d8-4fd3-a5f1-a1a52382eaec 00:29:45.203 17:46:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:45.203 17:46:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:45.203 17:46:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:45.203 17:46:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 06b89e29-b4d8-4fd3-a5f1-a1a52382eaec 00:29:45.203 17:46:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:45.203 { 00:29:45.203 "name": "06b89e29-b4d8-4fd3-a5f1-a1a52382eaec", 00:29:45.203 "aliases": [ 00:29:45.203 "lvs/basen1p0" 00:29:45.203 ], 00:29:45.203 "product_name": "Logical Volume", 00:29:45.203 "block_size": 4096, 00:29:45.203 "num_blocks": 5242880, 00:29:45.203 "uuid": "06b89e29-b4d8-4fd3-a5f1-a1a52382eaec", 00:29:45.203 "assigned_rate_limits": { 00:29:45.203 "rw_ios_per_sec": 0, 00:29:45.203 "rw_mbytes_per_sec": 0, 00:29:45.203 "r_mbytes_per_sec": 0, 00:29:45.203 "w_mbytes_per_sec": 0 00:29:45.203 }, 00:29:45.203 "claimed": false, 00:29:45.203 "zoned": false, 00:29:45.203 "supported_io_types": { 00:29:45.203 "read": true, 00:29:45.203 "write": true, 00:29:45.203 "unmap": true, 00:29:45.203 "flush": false, 00:29:45.203 "reset": true, 00:29:45.203 "nvme_admin": false, 00:29:45.203 "nvme_io": false, 00:29:45.203 "nvme_io_md": false, 00:29:45.203 "write_zeroes": 
true, 00:29:45.203 "zcopy": false, 00:29:45.203 "get_zone_info": false, 00:29:45.203 "zone_management": false, 00:29:45.203 "zone_append": false, 00:29:45.203 "compare": false, 00:29:45.203 "compare_and_write": false, 00:29:45.203 "abort": false, 00:29:45.203 "seek_hole": true, 00:29:45.203 "seek_data": true, 00:29:45.203 "copy": false, 00:29:45.203 "nvme_iov_md": false 00:29:45.203 }, 00:29:45.203 "driver_specific": { 00:29:45.203 "lvol": { 00:29:45.203 "lvol_store_uuid": "0b8fc061-2755-47d0-bc50-ca01517cbbb7", 00:29:45.203 "base_bdev": "basen1", 00:29:45.203 "thin_provision": true, 00:29:45.203 "num_allocated_clusters": 0, 00:29:45.203 "snapshot": false, 00:29:45.203 "clone": false, 00:29:45.203 "esnap_clone": false 00:29:45.203 } 00:29:45.203 } 00:29:45.203 } 00:29:45.203 ]' 00:29:45.203 17:46:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:45.463 17:46:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:45.463 17:46:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:45.463 17:46:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:45.463 17:46:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:45.463 17:46:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:45.463 17:46:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:45.463 17:46:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:45.463 17:46:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:45.723 17:46:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:45.723 17:46:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:45.723 17:46:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:45.723 17:46:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:45.723 17:46:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:45.723 17:46:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 06b89e29-b4d8-4fd3-a5f1-a1a52382eaec -c cachen1p0 --l2p_dram_limit 2 00:29:45.983 [2024-12-07 17:46:19.277961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.983 [2024-12-07 17:46:19.278014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:45.983 [2024-12-07 17:46:19.278027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:45.983 [2024-12-07 17:46:19.278034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.983 [2024-12-07 17:46:19.278078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.983 [2024-12-07 17:46:19.278086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:45.983 [2024-12-07 17:46:19.278094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:29:45.983 [2024-12-07 17:46:19.278100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.983 [2024-12-07 17:46:19.278116] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:45.983 [2024-12-07 
17:46:19.278625] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:45.983 [2024-12-07 17:46:19.278647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.983 [2024-12-07 17:46:19.278653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:45.983 [2024-12-07 17:46:19.278663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.533 ms 00:29:45.983 [2024-12-07 17:46:19.278669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.983 [2024-12-07 17:46:19.278691] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 57c8a49a-9a61-44e5-ae38-76b7064ca4ec 00:29:45.983 [2024-12-07 17:46:19.280069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.983 [2024-12-07 17:46:19.280100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:45.983 [2024-12-07 17:46:19.280109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:29:45.983 [2024-12-07 17:46:19.280117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.983 [2024-12-07 17:46:19.287072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.983 [2024-12-07 17:46:19.287102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:45.983 [2024-12-07 17:46:19.287109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.897 ms 00:29:45.983 [2024-12-07 17:46:19.287117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.983 [2024-12-07 17:46:19.287149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.983 [2024-12-07 17:46:19.287157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:45.983 [2024-12-07 17:46:19.287163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:45.983 [2024-12-07 17:46:19.287173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.983 [2024-12-07 17:46:19.287210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.983 [2024-12-07 17:46:19.287219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:45.983 [2024-12-07 17:46:19.287227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:45.983 [2024-12-07 17:46:19.287235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.983 [2024-12-07 17:46:19.287252] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:45.983 [2024-12-07 17:46:19.290516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.983 [2024-12-07 17:46:19.290541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:45.983 [2024-12-07 17:46:19.290552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.266 ms 00:29:45.983 [2024-12-07 17:46:19.290558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.983 [2024-12-07 17:46:19.290582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.983 [2024-12-07 17:46:19.290589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:45.983 [2024-12-07 17:46:19.290602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:45.983 [2024-12-07 17:46:19.290607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:45.983 [2024-12-07 17:46:19.290622] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:45.984 [2024-12-07 17:46:19.290735] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:45.984 [2024-12-07 17:46:19.290748] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:45.984 [2024-12-07 17:46:19.290757] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:45.984 [2024-12-07 17:46:19.290767] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:45.984 [2024-12-07 17:46:19.290775] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:45.984 [2024-12-07 17:46:19.290782] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:45.984 [2024-12-07 17:46:19.290788] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:45.984 [2024-12-07 17:46:19.290798] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:45.984 [2024-12-07 17:46:19.290804] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:45.984 [2024-12-07 17:46:19.290811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.984 [2024-12-07 17:46:19.290817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:45.984 [2024-12-07 17:46:19.290825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.191 ms 00:29:45.984 [2024-12-07 17:46:19.290830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.984 [2024-12-07 17:46:19.290896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.984 [2024-12-07 17:46:19.290907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:45.984 [2024-12-07 17:46:19.290915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:45.984 [2024-12-07 17:46:19.290921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.984 [2024-12-07 17:46:19.291016] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:45.984 [2024-12-07 17:46:19.291025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:45.984 [2024-12-07 17:46:19.291033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:45.984 [2024-12-07 17:46:19.291039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:45.984 [2024-12-07 17:46:19.291047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:45.984 [2024-12-07 17:46:19.291053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:45.984 [2024-12-07 17:46:19.291060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:45.984 [2024-12-07 17:46:19.291065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:45.984 [2024-12-07 17:46:19.291073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:45.984 [2024-12-07 17:46:19.291079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:45.984 [2024-12-07 17:46:19.291086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:45.984 [2024-12-07 17:46:19.291092] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:45.984 [2024-12-07 17:46:19.291099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:45.984 [2024-12-07 17:46:19.291103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:45.984 [2024-12-07 17:46:19.291111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:45.984 [2024-12-07 17:46:19.291116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:45.984 [2024-12-07 17:46:19.291125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:45.984 [2024-12-07 17:46:19.291130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:45.984 [2024-12-07 17:46:19.291136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:45.984 [2024-12-07 17:46:19.291142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:45.984 [2024-12-07 17:46:19.291149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:45.984 [2024-12-07 17:46:19.291157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:45.984 [2024-12-07 17:46:19.291163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:45.984 [2024-12-07 17:46:19.291169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:45.984 [2024-12-07 17:46:19.291176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:45.984 [2024-12-07 17:46:19.291181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:45.984 [2024-12-07 17:46:19.291188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:45.984 [2024-12-07 17:46:19.291193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:45.984 [2024-12-07 17:46:19.291199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:45.984 [2024-12-07 17:46:19.291205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:45.984 [2024-12-07 17:46:19.291212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:45.984 [2024-12-07 17:46:19.291217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:45.984 [2024-12-07 17:46:19.291225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:45.984 [2024-12-07 17:46:19.291230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:45.984 [2024-12-07 17:46:19.291238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:45.984 [2024-12-07 17:46:19.291244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:45.984 [2024-12-07 17:46:19.291250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:45.984 [2024-12-07 17:46:19.291255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:45.984 [2024-12-07 17:46:19.291261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:45.984 [2024-12-07 17:46:19.291266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:45.984 [2024-12-07 17:46:19.291273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:45.984 [2024-12-07 17:46:19.291278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:45.984 [2024-12-07 17:46:19.291285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:45.984 [2024-12-07 17:46:19.291290] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:45.984 [2024-12-07 17:46:19.291297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:45.984 [2024-12-07 17:46:19.291302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:45.984 [2024-12-07 17:46:19.291310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:45.984 [2024-12-07 17:46:19.291316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:45.984 [2024-12-07 17:46:19.291324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:45.984 [2024-12-07 17:46:19.291330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:45.984 [2024-12-07 17:46:19.291336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:45.984 [2024-12-07 17:46:19.291341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:45.984 [2024-12-07 17:46:19.291347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:45.984 [2024-12-07 17:46:19.291358] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:45.984 [2024-12-07 17:46:19.291369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:45.984 [2024-12-07 17:46:19.291376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:45.984 [2024-12-07 17:46:19.291383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:45.984 [2024-12-07 17:46:19.291389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:45.984 [2024-12-07 17:46:19.291397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:45.984 [2024-12-07 17:46:19.291402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:45.984 [2024-12-07 17:46:19.291410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:45.984 [2024-12-07 17:46:19.291416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:45.984 [2024-12-07 17:46:19.291423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:45.984 [2024-12-07 17:46:19.291428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:45.984 [2024-12-07 17:46:19.291437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:45.984 [2024-12-07 17:46:19.291443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:45.984 [2024-12-07 17:46:19.291450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:45.984 [2024-12-07 17:46:19.291455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:45.984 [2024-12-07 17:46:19.291462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:45.984 [2024-12-07 17:46:19.291468] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:45.984 [2024-12-07 17:46:19.291476] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:45.984 [2024-12-07 17:46:19.291482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:45.984 [2024-12-07 17:46:19.291489] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:45.984 [2024-12-07 17:46:19.291495] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:45.984 [2024-12-07 17:46:19.291503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:45.984 [2024-12-07 17:46:19.291509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.984 [2024-12-07 17:46:19.291516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:45.984 [2024-12-07 17:46:19.291522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.562 ms 00:29:45.984 [2024-12-07 17:46:19.291529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.984 [2024-12-07 17:46:19.291571] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
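The xtrace and FTL startup trace above follow ftl/common.sh's standard bring-up. Condensed into a standalone sketch for reference (the UUIDs and the 0000:00:10.0 cache BDF are values from this particular run, so treat them as placeholders):

    #!/usr/bin/env bash
    # Replay of the bring-up logged above; sizes in MiB match this run
    # (20 GiB thin-provisioned base volume, 5 GiB NV cache partition).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Drop any stale lvstores, then build the thin base volume on basen1.
    for lvs in $($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        $rpc bdev_lvol_delete_lvstore -u "$lvs"
    done
    lvs=$($rpc bdev_lvol_create_lvstore basen1 lvs)
    base=$($rpc bdev_lvol_create basen1p0 20480 -t -u "$lvs")

    # NV cache: attach the PCIe controller, split off a 5 GiB write buffer.
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # -> cachen1
    $rpc bdev_split_create cachen1 -s 5120 1                            # -> cachen1p0

    # Create the FTL bdev; this is what emits the ~4 s "FTL startup" trace,
    # most of which is the NV cache scrub reported just below.
    $rpc -t 60 bdev_ftl_create -b ftl -d "$base" -c cachen1p0 --l2p_dram_limit 2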
00:29:45.984 [2024-12-07 17:46:19.291583] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:50.187 [2024-12-07 17:46:22.922473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.187 [2024-12-07 17:46:22.922575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:50.187 [2024-12-07 17:46:22.922597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3630.885 ms 00:29:50.187 [2024-12-07 17:46:22.922610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.187 [2024-12-07 17:46:22.959353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.187 [2024-12-07 17:46:22.959425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:50.187 [2024-12-07 17:46:22.959443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.478 ms 00:29:50.187 [2024-12-07 17:46:22.959455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.187 [2024-12-07 17:46:22.959546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.187 [2024-12-07 17:46:22.959560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:50.187 [2024-12-07 17:46:22.959570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:29:50.187 [2024-12-07 17:46:22.959588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.187 [2024-12-07 17:46:22.999963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.187 [2024-12-07 17:46:23.000032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:50.187 [2024-12-07 17:46:23.000046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.327 ms 00:29:50.187 [2024-12-07 17:46:23.000060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.187 [2024-12-07 17:46:23.000098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.187 [2024-12-07 17:46:23.000113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:50.187 [2024-12-07 17:46:23.000123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:50.187 [2024-12-07 17:46:23.000133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.187 [2024-12-07 17:46:23.000833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.187 [2024-12-07 17:46:23.000878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:50.187 [2024-12-07 17:46:23.000901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.645 ms 00:29:50.187 [2024-12-07 17:46:23.000913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.187 [2024-12-07 17:46:23.000965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.187 [2024-12-07 17:46:23.000978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:50.187 [2024-12-07 17:46:23.001010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:29:50.187 [2024-12-07 17:46:23.001026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.187 [2024-12-07 17:46:23.021388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.187 [2024-12-07 17:46:23.021440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:50.187 [2024-12-07 17:46:23.021452] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.342 ms 00:29:50.187 [2024-12-07 17:46:23.021464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.188 [2024-12-07 17:46:23.047348] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:50.188 [2024-12-07 17:46:23.049279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.188 [2024-12-07 17:46:23.049327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:50.188 [2024-12-07 17:46:23.049345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.718 ms 00:29:50.188 [2024-12-07 17:46:23.049369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.188 [2024-12-07 17:46:23.082076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.188 [2024-12-07 17:46:23.082279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:50.188 [2024-12-07 17:46:23.082308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.655 ms 00:29:50.188 [2024-12-07 17:46:23.082319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.188 [2024-12-07 17:46:23.082647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.188 [2024-12-07 17:46:23.082680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:50.188 [2024-12-07 17:46:23.082698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.075 ms 00:29:50.188 [2024-12-07 17:46:23.082707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.188 [2024-12-07 17:46:23.107668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.188 [2024-12-07 17:46:23.107717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:50.188 [2024-12-07 17:46:23.107734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.899 ms 00:29:50.188 [2024-12-07 17:46:23.107744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.188 [2024-12-07 17:46:23.132669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.188 [2024-12-07 17:46:23.132714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:50.188 [2024-12-07 17:46:23.132730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.866 ms 00:29:50.188 [2024-12-07 17:46:23.132738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.188 [2024-12-07 17:46:23.133400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.188 [2024-12-07 17:46:23.133431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:50.188 [2024-12-07 17:46:23.133445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.613 ms 00:29:50.188 [2024-12-07 17:46:23.133456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.188 [2024-12-07 17:46:23.223926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.188 [2024-12-07 17:46:23.223975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:50.188 [2024-12-07 17:46:23.224015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 90.420 ms 00:29:50.188 [2024-12-07 17:46:23.224025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.188 [2024-12-07 17:46:23.252767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:50.188 [2024-12-07 17:46:23.252814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:50.188 [2024-12-07 17:46:23.252830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.640 ms 00:29:50.188 [2024-12-07 17:46:23.252840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.188 [2024-12-07 17:46:23.278430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.188 [2024-12-07 17:46:23.278476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:50.188 [2024-12-07 17:46:23.278492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.537 ms 00:29:50.188 [2024-12-07 17:46:23.278500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.188 [2024-12-07 17:46:23.305462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.188 [2024-12-07 17:46:23.305508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:50.188 [2024-12-07 17:46:23.305525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.910 ms 00:29:50.188 [2024-12-07 17:46:23.305533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.188 [2024-12-07 17:46:23.305591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.188 [2024-12-07 17:46:23.305602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:50.188 [2024-12-07 17:46:23.305618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:50.188 [2024-12-07 17:46:23.305626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.188 [2024-12-07 17:46:23.305725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.188 [2024-12-07 17:46:23.305740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:50.188 [2024-12-07 17:46:23.305752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:29:50.188 [2024-12-07 17:46:23.305761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.188 [2024-12-07 17:46:23.307952] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4029.402 ms, result 0 00:29:50.188 { 00:29:50.188 "name": "ftl", 00:29:50.188 "uuid": "57c8a49a-9a61-44e5-ae38-76b7064ca4ec" 00:29:50.188 } 00:29:50.188 17:46:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:50.188 [2024-12-07 17:46:23.526109] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:50.188 17:46:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:50.449 17:46:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:50.710 [2024-12-07 17:46:23.946515] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:50.710 17:46:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:50.971 [2024-12-07 17:46:24.151051] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:50.971 17:46:24 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:51.232 Fill FTL, iteration 1 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83070 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83070 /var/tmp/spdk.tgt.sock 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83070 ']' 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:51.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:51.232 17:46:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:51.232 [2024-12-07 17:46:24.601767] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
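Between the target-side export and the dd traffic, the RPCs logged above reduce to the following sketch (nqn, port and socket path taken from the log; the & backgrounding stands in for the harness's waitforlisten handling):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2018-09.io.spdk:cnode0

    # Export the FTL bdev over NVMe/TCP on loopback (common.sh@121..@126).
    $rpc nvmf_create_transport --trtype TCP
    $rpc nvmf_create_subsystem "$nqn" -a -m 1      # any host, max 1 namespace
    $rpc nvmf_subsystem_add_ns "$nqn" ftl
    $rpc nvmf_subsystem_add_listener "$nqn" -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    $rpc save_config

    # Initiator side: a second SPDK app pinned to core 1 with its own RPC
    # socket (common.sh@162); the harness then waits for the socket to appear.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock &
    spdk_ini_pid=$!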
00:29:51.232 [2024-12-07 17:46:24.602115] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83070 ] 00:29:51.491 [2024-12-07 17:46:24.759960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.492 [2024-12-07 17:46:24.860331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.430 17:46:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:52.430 17:46:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:52.430 17:46:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:52.430 ftln1 00:29:52.430 17:46:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:52.430 17:46:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:52.691 17:46:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:52.691 17:46:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83070 00:29:52.691 17:46:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83070 ']' 00:29:52.691 17:46:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83070 00:29:52.691 17:46:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:52.691 17:46:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:52.691 17:46:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83070 00:29:52.691 killing process with pid 83070 00:29:52.691 17:46:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:52.691 17:46:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:52.691 17:46:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83070' 00:29:52.691 17:46:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83070 00:29:52.691 17:46:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83070 00:29:54.072 17:46:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:54.072 17:46:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:54.072 [2024-12-07 17:46:27.423115] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
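What tcp_dd does under the hood, pieced together from the trace above: the helper target is used exactly once to attach the exported namespace and snapshot the bdev config, then spdk_dd runs standalone against that snapshot. A sketch (the redirect into ini.json is inferred from the -f check at common.sh@153 and is not shown explicitly in the log):

    ini=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    rpc_ini="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"

    # Attach the NVMe/TCP namespace; it surfaces as ftln1 (common.sh@167).
    $rpc_ini bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2018-09.io.spdk:cnode0

    # Wrap the bdev subsystem config in a full config document (@171..@173),
    # then retire the helper target -- spdk_dd replays the JSON itself.
    { echo '{"subsystems": ['; $rpc_ini save_subsystem_config -n bdev; echo ']}'; } > "$ini"
    kill "$spdk_ini_pid" && unset spdk_ini_pid

    # Fill pass: 1024 x 1 MiB of random data at queue depth 2 from offset 0;
    # the run above sustains roughly 260 MBps.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock --json="$ini" \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0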
00:29:54.072 [2024-12-07 17:46:27.423232] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83113 ] 00:29:54.331 [2024-12-07 17:46:27.578850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.331 [2024-12-07 17:46:27.652698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:55.702  [2024-12-07T17:46:30.016Z] Copying: 266/1024 [MB] (266 MBps) [2024-12-07T17:46:31.393Z] Copying: 527/1024 [MB] (261 MBps) [2024-12-07T17:46:31.960Z] Copying: 784/1024 [MB] (257 MBps) [2024-12-07T17:46:32.527Z] Copying: 1024/1024 [MB] (average 260 MBps) 00:29:59.145 00:29:59.145 Calculate MD5 checksum, iteration 1 00:29:59.145 17:46:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:29:59.145 17:46:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:29:59.145 17:46:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:59.146 17:46:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:59.146 17:46:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:59.146 17:46:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:59.146 17:46:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:59.146 17:46:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:59.405 [2024-12-07 17:46:32.528801] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
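The checksum pass is the mirror image: read the same extent back into a file and hash it. A sketch, reusing $ini from the fill above (reads run about twice as fast here, ~530 MBps average):

    file=/home/vagrant/spdk_repo/spdk/test/ftl/file

    # Read the 1 GiB just written back out of ftln1...
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock --json="$ini" \
        --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=0

    # ...and stash its MD5 for comparison later in the test (@47/@48).
    sums[i]=$(md5sum "$file" | cut -f1 -d' ')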
00:29:59.405 [2024-12-07 17:46:32.529032] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83166 ] 00:29:59.405 [2024-12-07 17:46:32.676136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.405 [2024-12-07 17:46:32.750468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.899  [2024-12-07T17:46:35.224Z] Copying: 552/1024 [MB] (552 MBps) [2024-12-07T17:46:35.797Z] Copying: 1024/1024 [MB] (average 532 MBps) 00:30:02.415 00:30:02.415 17:46:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:02.415 17:46:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:04.965 17:46:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:04.965 Fill FTL, iteration 2 00:30:04.965 17:46:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=29e19a4751922585703ce0fe809f177c 00:30:04.965 17:46:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:04.965 17:46:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:04.965 17:46:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:30:04.965 17:46:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:04.965 17:46:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:04.965 17:46:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:04.965 17:46:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:04.965 17:46:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:04.965 17:46:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:04.965 [2024-12-07 17:46:37.970161] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
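Pulling the upgrade_shutdown.sh line references together (@28 through @48 in the trace), the fill/verify loop that produced the two iterations reconstructs roughly as follows; tcp_dd is the common.sh helper shown above, and $testfile stands for test/ftl/file:

    # Reconstruction of the loop: each iteration writes and hashes
    # bs * count = 1 MiB * 1024 = 1 GiB, then both offsets advance by count.
    size=1073741824; seek=0; skip=0; bs=1048576; count=1024; iterations=2; qd=2
    sums=()
    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $(( i + 1 ))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$(( seek + count ))    # @41: 1024 after iter 1, 2048 after iter 2

        echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
        tcp_dd --ib=ftln1 --of="$testfile" --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$(( skip + count ))    # @45: 1024, then 2048
        sums[i]=$(md5sum "$testfile" | cut -f1 -d' ')   # @47/@48
    done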
00:30:04.965 [2024-12-07 17:46:37.970259] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83243 ] 00:30:04.966 [2024-12-07 17:46:38.120311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.966 [2024-12-07 17:46:38.211992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.353  [2024-12-07T17:46:40.674Z] Copying: 232/1024 [MB] (232 MBps) [2024-12-07T17:46:41.610Z] Copying: 478/1024 [MB] (246 MBps) [2024-12-07T17:46:42.554Z] Copying: 731/1024 [MB] (253 MBps) [2024-12-07T17:46:42.815Z] Copying: 960/1024 [MB] (229 MBps) [2024-12-07T17:46:43.759Z] Copying: 1024/1024 [MB] (average 239 MBps) 00:30:10.377 00:30:10.377 17:46:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:10.377 17:46:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:10.377 Calculate MD5 checksum, iteration 2 00:30:10.377 17:46:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:10.377 17:46:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:10.377 17:46:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:10.377 17:46:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:10.377 17:46:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:10.377 17:46:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:10.377 [2024-12-07 17:46:43.508033] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:30:10.377 [2024-12-07 17:46:43.508283] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83296 ] 00:30:10.377 [2024-12-07 17:46:43.663770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.377 [2024-12-07 17:46:43.749818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.296  [2024-12-07T17:46:45.938Z] Copying: 564/1024 [MB] (564 MBps) [2024-12-07T17:46:46.871Z] Copying: 1024/1024 [MB] (average 592 MBps) 00:30:13.489 00:30:13.489 17:46:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:13.489 17:46:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:15.392 17:46:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:15.392 17:46:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=fd945eaefdae6b992cf664860f4de993 00:30:15.392 17:46:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:15.392 17:46:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:15.392 17:46:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:15.392 [2024-12-07 17:46:48.639268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.392 [2024-12-07 17:46:48.639462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:15.392 [2024-12-07 17:46:48.639480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:15.392 [2024-12-07 17:46:48.639487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.392 [2024-12-07 17:46:48.639511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.392 [2024-12-07 17:46:48.639523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:15.392 [2024-12-07 17:46:48.639530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:15.392 [2024-12-07 17:46:48.639536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.392 [2024-12-07 17:46:48.639552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.392 [2024-12-07 17:46:48.639559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:15.392 [2024-12-07 17:46:48.639565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:15.392 [2024-12-07 17:46:48.639571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.392 [2024-12-07 17:46:48.639627] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.343 ms, result 0 00:30:15.392 true 00:30:15.392 17:46:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:15.653 { 00:30:15.653 "name": "ftl", 00:30:15.653 "properties": [ 00:30:15.653 { 00:30:15.653 "name": "superblock_version", 00:30:15.653 "value": 5, 00:30:15.653 "read-only": true 00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "name": "base_device", 00:30:15.653 "bands": [ 00:30:15.653 { 00:30:15.653 "id": 0, 00:30:15.653 "state": "FREE", 00:30:15.653 "validity": 0.0 
00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "id": 1, 00:30:15.653 "state": "FREE", 00:30:15.653 "validity": 0.0 00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "id": 2, 00:30:15.653 "state": "FREE", 00:30:15.653 "validity": 0.0 00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "id": 3, 00:30:15.653 "state": "FREE", 00:30:15.653 "validity": 0.0 00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "id": 4, 00:30:15.653 "state": "FREE", 00:30:15.653 "validity": 0.0 00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "id": 5, 00:30:15.653 "state": "FREE", 00:30:15.653 "validity": 0.0 00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "id": 6, 00:30:15.653 "state": "FREE", 00:30:15.653 "validity": 0.0 00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "id": 7, 00:30:15.653 "state": "FREE", 00:30:15.653 "validity": 0.0 00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "id": 8, 00:30:15.653 "state": "FREE", 00:30:15.653 "validity": 0.0 00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "id": 9, 00:30:15.653 "state": "FREE", 00:30:15.653 "validity": 0.0 00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "id": 10, 00:30:15.653 "state": "FREE", 00:30:15.653 "validity": 0.0 00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "id": 11, 00:30:15.653 "state": "FREE", 00:30:15.653 "validity": 0.0 00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "id": 12, 00:30:15.653 "state": "FREE", 00:30:15.653 "validity": 0.0 00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "id": 13, 00:30:15.653 "state": "FREE", 00:30:15.653 "validity": 0.0 00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "id": 14, 00:30:15.653 "state": "FREE", 00:30:15.653 "validity": 0.0 00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "id": 15, 00:30:15.653 "state": "FREE", 00:30:15.653 "validity": 0.0 00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "id": 16, 00:30:15.653 "state": "FREE", 00:30:15.653 "validity": 0.0 00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "id": 17, 00:30:15.653 "state": "FREE", 00:30:15.653 "validity": 0.0 00:30:15.653 } 00:30:15.653 ], 00:30:15.653 "read-only": true 00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "name": "cache_device", 00:30:15.653 "type": "bdev", 00:30:15.653 "chunks": [ 00:30:15.653 { 00:30:15.653 "id": 0, 00:30:15.653 "state": "INACTIVE", 00:30:15.653 "utilization": 0.0 00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "id": 1, 00:30:15.653 "state": "CLOSED", 00:30:15.653 "utilization": 1.0 00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "id": 2, 00:30:15.653 "state": "CLOSED", 00:30:15.653 "utilization": 1.0 00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "id": 3, 00:30:15.653 "state": "OPEN", 00:30:15.653 "utilization": 0.001953125 00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "id": 4, 00:30:15.653 "state": "OPEN", 00:30:15.653 "utilization": 0.0 00:30:15.653 } 00:30:15.653 ], 00:30:15.653 "read-only": true 00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "name": "verbose_mode", 00:30:15.653 "value": true, 00:30:15.653 "unit": "", 00:30:15.653 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:15.653 }, 00:30:15.653 { 00:30:15.653 "name": "prep_upgrade_on_shutdown", 00:30:15.653 "value": false, 00:30:15.653 "unit": "", 00:30:15.653 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:15.653 } 00:30:15.653 ] 00:30:15.653 } 00:30:15.653 17:46:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:15.653 [2024-12-07 17:46:48.947489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
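The property round-trips in this phase (@52 through @71, above and continuing below) arm the shutdown-upgrade path and double-check that the cache actually holds data. A sketch; the failure handling at the end is illustrative, since the script's actual error path isn't visible in the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Turn on verbose properties, then request upgrade prep on shutdown.
    $rpc bdev_ftl_set_property -b ftl -p verbose_mode -v true              # @52
    $rpc bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true  # @56

    # @63: count cache chunks with non-zero utilization. This run finds 3
    # (two CLOSED chunks at 1.0 plus one OPEN at ~0.002) and requires > 0.
    used=$($rpc bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device")
               | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -eq 0 ]] && exit 1   # illustrative; actual handling not shown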
00:30:15.653 [2024-12-07 17:46:48.947520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:15.653 [2024-12-07 17:46:48.947528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:15.653 [2024-12-07 17:46:48.947534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.653 [2024-12-07 17:46:48.947550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.653 [2024-12-07 17:46:48.947556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:15.653 [2024-12-07 17:46:48.947562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:15.653 [2024-12-07 17:46:48.947568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.653 [2024-12-07 17:46:48.947582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.653 [2024-12-07 17:46:48.947588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:15.653 [2024-12-07 17:46:48.947594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:15.653 [2024-12-07 17:46:48.947599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.653 [2024-12-07 17:46:48.947640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.141 ms, result 0 00:30:15.653 true 00:30:15.653 17:46:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:15.653 17:46:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:15.653 17:46:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:15.912 17:46:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:15.912 17:46:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:15.912 17:46:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:16.173 [2024-12-07 17:46:49.307749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.173 [2024-12-07 17:46:49.307778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:16.173 [2024-12-07 17:46:49.307786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:16.173 [2024-12-07 17:46:49.307792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.173 [2024-12-07 17:46:49.307807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.173 [2024-12-07 17:46:49.307813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:16.173 [2024-12-07 17:46:49.307819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:16.173 [2024-12-07 17:46:49.307824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.173 [2024-12-07 17:46:49.307838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.174 [2024-12-07 17:46:49.307844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:16.174 [2024-12-07 17:46:49.307849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:16.174 [2024-12-07 17:46:49.307855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:16.174 [2024-12-07 17:46:49.307894] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.133 ms, result 0 00:30:16.174 true 00:30:16.174 17:46:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:16.174 { 00:30:16.174 "name": "ftl", 00:30:16.174 "properties": [ 00:30:16.174 { 00:30:16.174 "name": "superblock_version", 00:30:16.174 "value": 5, 00:30:16.174 "read-only": true 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "name": "base_device", 00:30:16.174 "bands": [ 00:30:16.174 { 00:30:16.174 "id": 0, 00:30:16.174 "state": "FREE", 00:30:16.174 "validity": 0.0 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "id": 1, 00:30:16.174 "state": "FREE", 00:30:16.174 "validity": 0.0 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "id": 2, 00:30:16.174 "state": "FREE", 00:30:16.174 "validity": 0.0 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "id": 3, 00:30:16.174 "state": "FREE", 00:30:16.174 "validity": 0.0 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "id": 4, 00:30:16.174 "state": "FREE", 00:30:16.174 "validity": 0.0 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "id": 5, 00:30:16.174 "state": "FREE", 00:30:16.174 "validity": 0.0 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "id": 6, 00:30:16.174 "state": "FREE", 00:30:16.174 "validity": 0.0 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "id": 7, 00:30:16.174 "state": "FREE", 00:30:16.174 "validity": 0.0 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "id": 8, 00:30:16.174 "state": "FREE", 00:30:16.174 "validity": 0.0 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "id": 9, 00:30:16.174 "state": "FREE", 00:30:16.174 "validity": 0.0 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "id": 10, 00:30:16.174 "state": "FREE", 00:30:16.174 "validity": 0.0 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "id": 11, 00:30:16.174 "state": "FREE", 00:30:16.174 "validity": 0.0 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "id": 12, 00:30:16.174 "state": "FREE", 00:30:16.174 "validity": 0.0 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "id": 13, 00:30:16.174 "state": "FREE", 00:30:16.174 "validity": 0.0 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "id": 14, 00:30:16.174 "state": "FREE", 00:30:16.174 "validity": 0.0 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "id": 15, 00:30:16.174 "state": "FREE", 00:30:16.174 "validity": 0.0 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "id": 16, 00:30:16.174 "state": "FREE", 00:30:16.174 "validity": 0.0 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "id": 17, 00:30:16.174 "state": "FREE", 00:30:16.174 "validity": 0.0 00:30:16.174 } 00:30:16.174 ], 00:30:16.174 "read-only": true 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "name": "cache_device", 00:30:16.174 "type": "bdev", 00:30:16.174 "chunks": [ 00:30:16.174 { 00:30:16.174 "id": 0, 00:30:16.174 "state": "INACTIVE", 00:30:16.174 "utilization": 0.0 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "id": 1, 00:30:16.174 "state": "CLOSED", 00:30:16.174 "utilization": 1.0 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "id": 2, 00:30:16.174 "state": "CLOSED", 00:30:16.174 "utilization": 1.0 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "id": 3, 00:30:16.174 "state": "OPEN", 00:30:16.174 "utilization": 0.001953125 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "id": 4, 00:30:16.174 "state": "OPEN", 00:30:16.174 "utilization": 0.0 00:30:16.174 } 00:30:16.174 ], 00:30:16.174 "read-only": true 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "name": "verbose_mode", 
00:30:16.174 "value": true, 00:30:16.174 "unit": "", 00:30:16.174 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:16.174 }, 00:30:16.174 { 00:30:16.174 "name": "prep_upgrade_on_shutdown", 00:30:16.174 "value": true, 00:30:16.174 "unit": "", 00:30:16.174 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:16.174 } 00:30:16.174 ] 00:30:16.174 } 00:30:16.174 17:46:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:16.174 17:46:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82949 ]] 00:30:16.174 17:46:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82949 00:30:16.174 17:46:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82949 ']' 00:30:16.174 17:46:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82949 00:30:16.174 17:46:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:16.174 17:46:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:16.174 17:46:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82949 00:30:16.435 17:46:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:16.435 17:46:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:16.435 killing process with pid 82949 00:30:16.435 17:46:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82949' 00:30:16.435 17:46:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 82949 00:30:16.435 17:46:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82949 00:30:17.002 [2024-12-07 17:46:50.130096] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:17.003 [2024-12-07 17:46:50.142336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.003 [2024-12-07 17:46:50.142373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:17.003 [2024-12-07 17:46:50.142384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:17.003 [2024-12-07 17:46:50.142391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.003 [2024-12-07 17:46:50.142410] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:17.003 [2024-12-07 17:46:50.144536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.003 [2024-12-07 17:46:50.144563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:17.003 [2024-12-07 17:46:50.144571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.115 ms 00:30:17.003 [2024-12-07 17:46:50.144582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.547533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.124 [2024-12-07 17:46:57.547867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:25.124 [2024-12-07 17:46:57.547904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7402.901 ms 00:30:25.124 [2024-12-07 17:46:57.547915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.549563] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:30:25.124 [2024-12-07 17:46:57.549605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:25.124 [2024-12-07 17:46:57.549618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.623 ms 00:30:25.124 [2024-12-07 17:46:57.549628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.550770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.124 [2024-12-07 17:46:57.550798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:25.124 [2024-12-07 17:46:57.550811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.108 ms 00:30:25.124 [2024-12-07 17:46:57.550827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.562732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.124 [2024-12-07 17:46:57.562926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:25.124 [2024-12-07 17:46:57.562949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.864 ms 00:30:25.124 [2024-12-07 17:46:57.562959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.569820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.124 [2024-12-07 17:46:57.569950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:25.124 [2024-12-07 17:46:57.569968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.696 ms 00:30:25.124 [2024-12-07 17:46:57.569976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.570325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.124 [2024-12-07 17:46:57.570362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:25.124 [2024-12-07 17:46:57.570374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:30:25.124 [2024-12-07 17:46:57.570384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.580516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.124 [2024-12-07 17:46:57.581053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:25.124 [2024-12-07 17:46:57.581120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.116 ms 00:30:25.124 [2024-12-07 17:46:57.581145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.599314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.124 [2024-12-07 17:46:57.599347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:25.124 [2024-12-07 17:46:57.599357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.053 ms 00:30:25.124 [2024-12-07 17:46:57.599364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.608908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.124 [2024-12-07 17:46:57.609054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:25.124 [2024-12-07 17:46:57.609071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.512 ms 00:30:25.124 [2024-12-07 17:46:57.609078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.618471] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.124 [2024-12-07 17:46:57.618504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:25.124 [2024-12-07 17:46:57.618513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.284 ms 00:30:25.124 [2024-12-07 17:46:57.618521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.618550] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:25.124 [2024-12-07 17:46:57.618574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:25.124 [2024-12-07 17:46:57.618585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:25.124 [2024-12-07 17:46:57.618593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:25.124 [2024-12-07 17:46:57.618601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:25.124 [2024-12-07 17:46:57.618609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:25.124 [2024-12-07 17:46:57.618617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:25.124 [2024-12-07 17:46:57.618624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:25.124 [2024-12-07 17:46:57.618631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:25.124 [2024-12-07 17:46:57.618640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:25.124 [2024-12-07 17:46:57.618647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:25.124 [2024-12-07 17:46:57.618655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:25.124 [2024-12-07 17:46:57.618663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:25.124 [2024-12-07 17:46:57.618670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:25.124 [2024-12-07 17:46:57.618678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:25.124 [2024-12-07 17:46:57.618686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:25.124 [2024-12-07 17:46:57.618693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:25.124 [2024-12-07 17:46:57.618701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:25.124 [2024-12-07 17:46:57.618708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:25.124 [2024-12-07 17:46:57.618718] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:25.124 [2024-12-07 17:46:57.618726] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 57c8a49a-9a61-44e5-ae38-76b7064ca4ec 00:30:25.124 [2024-12-07 17:46:57.618734] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:25.124 [2024-12-07 17:46:57.618741] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:30:25.124 [2024-12-07 17:46:57.618748] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:25.124 [2024-12-07 17:46:57.618756] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:25.124 [2024-12-07 17:46:57.618765] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:25.124 [2024-12-07 17:46:57.618774] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:25.124 [2024-12-07 17:46:57.618786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:25.124 [2024-12-07 17:46:57.618792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:25.124 [2024-12-07 17:46:57.618799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:25.124 [2024-12-07 17:46:57.618807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.124 [2024-12-07 17:46:57.618816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:25.124 [2024-12-07 17:46:57.618825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.258 ms 00:30:25.124 [2024-12-07 17:46:57.618833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.632240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.124 [2024-12-07 17:46:57.632271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:25.124 [2024-12-07 17:46:57.632286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.390 ms 00:30:25.124 [2024-12-07 17:46:57.632294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.632657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.124 [2024-12-07 17:46:57.632673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:25.124 [2024-12-07 17:46:57.632683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.345 ms 00:30:25.124 [2024-12-07 17:46:57.632691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.678107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.124 [2024-12-07 17:46:57.678163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:25.124 [2024-12-07 17:46:57.678174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.124 [2024-12-07 17:46:57.678183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.678215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.124 [2024-12-07 17:46:57.678224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:25.124 [2024-12-07 17:46:57.678232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.124 [2024-12-07 17:46:57.678240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.678315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.124 [2024-12-07 17:46:57.678326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:25.124 [2024-12-07 17:46:57.678340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.124 [2024-12-07 17:46:57.678348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.678366] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.124 [2024-12-07 17:46:57.678374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:25.124 [2024-12-07 17:46:57.678382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.124 [2024-12-07 17:46:57.678391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.765740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.124 [2024-12-07 17:46:57.765802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:25.124 [2024-12-07 17:46:57.765824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.124 [2024-12-07 17:46:57.765834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.840250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.124 [2024-12-07 17:46:57.840311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:25.124 [2024-12-07 17:46:57.840326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.124 [2024-12-07 17:46:57.840336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.840463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.124 [2024-12-07 17:46:57.840475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:25.124 [2024-12-07 17:46:57.840486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.124 [2024-12-07 17:46:57.840503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.840554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.124 [2024-12-07 17:46:57.840566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:25.124 [2024-12-07 17:46:57.840575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.124 [2024-12-07 17:46:57.840585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.840697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.124 [2024-12-07 17:46:57.840711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:25.124 [2024-12-07 17:46:57.840721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.124 [2024-12-07 17:46:57.840730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.840770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.124 [2024-12-07 17:46:57.840780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:25.124 [2024-12-07 17:46:57.840790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.124 [2024-12-07 17:46:57.840799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.840854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.124 [2024-12-07 17:46:57.840865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:25.124 [2024-12-07 17:46:57.840875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.124 [2024-12-07 17:46:57.840885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 
[2024-12-07 17:46:57.840948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.124 [2024-12-07 17:46:57.840961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:25.124 [2024-12-07 17:46:57.840971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.124 [2024-12-07 17:46:57.841017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.124 [2024-12-07 17:46:57.841208] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7698.770 ms, result 0 00:30:27.673 17:47:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:27.673 17:47:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:27.673 17:47:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:27.673 17:47:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:27.673 17:47:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:27.673 17:47:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83493 00:30:27.673 17:47:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:27.673 17:47:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83493 00:30:27.673 17:47:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83493 ']' 00:30:27.673 17:47:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:27.673 17:47:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.673 17:47:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:27.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.673 17:47:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:27.673 17:47:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:27.673 17:47:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:27.673 [2024-12-07 17:47:00.804889] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
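
The waitforlisten step above polls the freshly launched spdk_tgt until its RPC socket answers before the script issues any further rpc.py calls. A minimal bash sketch of that wait pattern, with the rpc.py path matching the one used elsewhere in this trace but the retry budget and sleep interval chosen here for illustration, not taken from the harness's actual common.sh:

  #!/usr/bin/env bash
  # Poll an SPDK target's RPC socket until it responds, or give up.
  RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  RPC_SOCK=/var/tmp/spdk.sock

  wait_for_rpc() {
    local tries=100
    while (( tries-- > 0 )); do
      # rpc_get_methods only succeeds once the app is listening on the socket
      if "$RPC_PY" -s "$RPC_SOCK" rpc_get_methods &>/dev/null; then
        return 0
      fi
      sleep 0.1
    done
    echo "target never started listening on $RPC_SOCK" >&2
    return 1
  }

  wait_for_rpc
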
00:30:27.673 [2024-12-07 17:47:00.805057] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83493 ] 00:30:27.673 [2024-12-07 17:47:00.966843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.935 [2024-12-07 17:47:01.104038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.881 [2024-12-07 17:47:02.000926] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:28.881 [2024-12-07 17:47:02.001041] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:28.881 [2024-12-07 17:47:02.161109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.881 [2024-12-07 17:47:02.161168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:28.881 [2024-12-07 17:47:02.161184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:28.881 [2024-12-07 17:47:02.161194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.881 [2024-12-07 17:47:02.161261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.881 [2024-12-07 17:47:02.161273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:28.881 [2024-12-07 17:47:02.161282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:30:28.881 [2024-12-07 17:47:02.161291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.881 [2024-12-07 17:47:02.161320] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:28.881 [2024-12-07 17:47:02.162189] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:28.881 [2024-12-07 17:47:02.162338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.881 [2024-12-07 17:47:02.162351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:28.881 [2024-12-07 17:47:02.162362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.028 ms 00:30:28.881 [2024-12-07 17:47:02.162377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.881 [2024-12-07 17:47:02.164911] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:28.881 [2024-12-07 17:47:02.180731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.881 [2024-12-07 17:47:02.180788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:28.881 [2024-12-07 17:47:02.180810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.822 ms 00:30:28.881 [2024-12-07 17:47:02.180819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.881 [2024-12-07 17:47:02.180915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.881 [2024-12-07 17:47:02.180928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:28.881 [2024-12-07 17:47:02.180939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:30:28.881 [2024-12-07 17:47:02.180948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.881 [2024-12-07 17:47:02.192791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.881 [2024-12-07 
17:47:02.193107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:28.881 [2024-12-07 17:47:02.193132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.730 ms 00:30:28.881 [2024-12-07 17:47:02.193141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.881 [2024-12-07 17:47:02.193229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.881 [2024-12-07 17:47:02.193242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:28.881 [2024-12-07 17:47:02.193251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:30:28.881 [2024-12-07 17:47:02.193260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.881 [2024-12-07 17:47:02.193324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.881 [2024-12-07 17:47:02.193340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:28.881 [2024-12-07 17:47:02.193370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:28.881 [2024-12-07 17:47:02.193380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.881 [2024-12-07 17:47:02.193409] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:28.881 [2024-12-07 17:47:02.198061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.881 [2024-12-07 17:47:02.198103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:28.881 [2024-12-07 17:47:02.198115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.659 ms 00:30:28.881 [2024-12-07 17:47:02.198130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.882 [2024-12-07 17:47:02.198168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.882 [2024-12-07 17:47:02.198178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:28.882 [2024-12-07 17:47:02.198189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:28.882 [2024-12-07 17:47:02.198198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.882 [2024-12-07 17:47:02.198244] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:28.882 [2024-12-07 17:47:02.198277] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:28.882 [2024-12-07 17:47:02.198319] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:28.882 [2024-12-07 17:47:02.198338] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:28.882 [2024-12-07 17:47:02.198451] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:28.882 [2024-12-07 17:47:02.198472] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:28.882 [2024-12-07 17:47:02.198484] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:28.882 [2024-12-07 17:47:02.198496] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:28.882 [2024-12-07 17:47:02.198506] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:28.882 [2024-12-07 17:47:02.198520] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:28.882 [2024-12-07 17:47:02.198531] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:28.882 [2024-12-07 17:47:02.198540] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:28.882 [2024-12-07 17:47:02.198549] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:28.882 [2024-12-07 17:47:02.198559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.882 [2024-12-07 17:47:02.198568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:28.882 [2024-12-07 17:47:02.198577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.318 ms 00:30:28.882 [2024-12-07 17:47:02.198585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.882 [2024-12-07 17:47:02.198671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.882 [2024-12-07 17:47:02.198690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:28.882 [2024-12-07 17:47:02.198703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:30:28.882 [2024-12-07 17:47:02.198711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.882 [2024-12-07 17:47:02.198818] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:28.882 [2024-12-07 17:47:02.198831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:28.882 [2024-12-07 17:47:02.198841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:28.882 [2024-12-07 17:47:02.198850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:28.882 [2024-12-07 17:47:02.198858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:28.882 [2024-12-07 17:47:02.198866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:28.882 [2024-12-07 17:47:02.198876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:28.882 [2024-12-07 17:47:02.198885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:28.882 [2024-12-07 17:47:02.198893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:28.882 [2024-12-07 17:47:02.198900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:28.882 [2024-12-07 17:47:02.198907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:28.882 [2024-12-07 17:47:02.198914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:28.882 [2024-12-07 17:47:02.198921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:28.882 [2024-12-07 17:47:02.198932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:28.882 [2024-12-07 17:47:02.198941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:28.882 [2024-12-07 17:47:02.198947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:28.882 [2024-12-07 17:47:02.198954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:28.882 [2024-12-07 17:47:02.198961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:28.882 [2024-12-07 17:47:02.198969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:28.882 [2024-12-07 17:47:02.198976] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:28.882 [2024-12-07 17:47:02.199000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:28.882 [2024-12-07 17:47:02.199009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:28.882 [2024-12-07 17:47:02.199017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:28.882 [2024-12-07 17:47:02.199033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:28.882 [2024-12-07 17:47:02.199040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:28.882 [2024-12-07 17:47:02.199046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:28.882 [2024-12-07 17:47:02.199053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:28.882 [2024-12-07 17:47:02.199061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:28.882 [2024-12-07 17:47:02.199070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:28.882 [2024-12-07 17:47:02.199078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:28.882 [2024-12-07 17:47:02.199085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:28.882 [2024-12-07 17:47:02.199095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:28.882 [2024-12-07 17:47:02.199102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:28.882 [2024-12-07 17:47:02.199109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:28.882 [2024-12-07 17:47:02.199117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:28.882 [2024-12-07 17:47:02.199125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:28.882 [2024-12-07 17:47:02.199132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:28.882 [2024-12-07 17:47:02.199140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:28.882 [2024-12-07 17:47:02.199147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:28.882 [2024-12-07 17:47:02.199154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:28.882 [2024-12-07 17:47:02.199160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:28.882 [2024-12-07 17:47:02.199170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:28.882 [2024-12-07 17:47:02.199178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:28.882 [2024-12-07 17:47:02.199185] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:28.882 [2024-12-07 17:47:02.199194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:28.882 [2024-12-07 17:47:02.199209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:28.882 [2024-12-07 17:47:02.199217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:28.882 [2024-12-07 17:47:02.199228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:28.882 [2024-12-07 17:47:02.199235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:28.882 [2024-12-07 17:47:02.199243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:28.882 [2024-12-07 17:47:02.199250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:28.882 [2024-12-07 17:47:02.199257] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:28.882 [2024-12-07 17:47:02.199265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:28.882 [2024-12-07 17:47:02.199274] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:28.882 [2024-12-07 17:47:02.199284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:28.882 [2024-12-07 17:47:02.199293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:28.882 [2024-12-07 17:47:02.199300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:28.882 [2024-12-07 17:47:02.199308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:28.882 [2024-12-07 17:47:02.199316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:28.882 [2024-12-07 17:47:02.199323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:28.882 [2024-12-07 17:47:02.199330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:28.882 [2024-12-07 17:47:02.199338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:28.882 [2024-12-07 17:47:02.199345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:28.882 [2024-12-07 17:47:02.199352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:28.882 [2024-12-07 17:47:02.199360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:28.882 [2024-12-07 17:47:02.199368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:28.882 [2024-12-07 17:47:02.199375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:28.882 [2024-12-07 17:47:02.199384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:28.882 [2024-12-07 17:47:02.199391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:28.882 [2024-12-07 17:47:02.199399] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:28.882 [2024-12-07 17:47:02.199408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:28.882 [2024-12-07 17:47:02.199418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:28.882 [2024-12-07 17:47:02.199425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:28.883 [2024-12-07 17:47:02.199432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:28.883 [2024-12-07 17:47:02.199439] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:28.883 [2024-12-07 17:47:02.199446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.883 [2024-12-07 17:47:02.199454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:28.883 [2024-12-07 17:47:02.199472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.698 ms 00:30:28.883 [2024-12-07 17:47:02.199480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.883 [2024-12-07 17:47:02.199525] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:28.883 [2024-12-07 17:47:02.199538] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:33.091 [2024-12-07 17:47:06.161522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.091 [2024-12-07 17:47:06.161583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:33.091 [2024-12-07 17:47:06.161598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3961.982 ms 00:30:33.091 [2024-12-07 17:47:06.161606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.091 [2024-12-07 17:47:06.185302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.091 [2024-12-07 17:47:06.185344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:33.091 [2024-12-07 17:47:06.185359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.532 ms 00:30:33.091 [2024-12-07 17:47:06.185367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.091 [2024-12-07 17:47:06.185421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.091 [2024-12-07 17:47:06.185434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:33.091 [2024-12-07 17:47:06.185442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:30:33.091 [2024-12-07 17:47:06.185449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.091 [2024-12-07 17:47:06.211922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.091 [2024-12-07 17:47:06.211957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:33.091 [2024-12-07 17:47:06.211968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.442 ms 00:30:33.091 [2024-12-07 17:47:06.211975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.091 [2024-12-07 17:47:06.212012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.091 [2024-12-07 17:47:06.212020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:33.091 [2024-12-07 17:47:06.212026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:33.091 [2024-12-07 17:47:06.212033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.091 [2024-12-07 17:47:06.212446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.091 [2024-12-07 17:47:06.212462] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:33.091 [2024-12-07 17:47:06.212469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.376 ms 00:30:33.091 [2024-12-07 17:47:06.212476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.091 [2024-12-07 17:47:06.212514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.091 [2024-12-07 17:47:06.212522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:33.091 [2024-12-07 17:47:06.212528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:33.091 [2024-12-07 17:47:06.212535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.092 [2024-12-07 17:47:06.225812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.092 [2024-12-07 17:47:06.225839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:33.092 [2024-12-07 17:47:06.225847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.260 ms 00:30:33.092 [2024-12-07 17:47:06.225853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.092 [2024-12-07 17:47:06.249963] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:33.092 [2024-12-07 17:47:06.250197] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:33.092 [2024-12-07 17:47:06.250213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.092 [2024-12-07 17:47:06.250221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:33.092 [2024-12-07 17:47:06.250228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.278 ms 00:30:33.092 [2024-12-07 17:47:06.250234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.092 [2024-12-07 17:47:06.261079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.092 [2024-12-07 17:47:06.261186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:33.092 [2024-12-07 17:47:06.261200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.821 ms 00:30:33.092 [2024-12-07 17:47:06.261207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.092 [2024-12-07 17:47:06.270036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.092 [2024-12-07 17:47:06.270060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:33.092 [2024-12-07 17:47:06.270068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.719 ms 00:30:33.092 [2024-12-07 17:47:06.270075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.092 [2024-12-07 17:47:06.279285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.092 [2024-12-07 17:47:06.279308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:33.092 [2024-12-07 17:47:06.279315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.194 ms 00:30:33.092 [2024-12-07 17:47:06.279322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.092 [2024-12-07 17:47:06.279775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.092 [2024-12-07 17:47:06.279791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:33.092 [2024-12-07 
17:47:06.279798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.403 ms 00:30:33.092 [2024-12-07 17:47:06.279804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.092 [2024-12-07 17:47:06.327113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.092 [2024-12-07 17:47:06.327152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:33.092 [2024-12-07 17:47:06.327163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.295 ms 00:30:33.092 [2024-12-07 17:47:06.327169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.092 [2024-12-07 17:47:06.335481] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:33.092 [2024-12-07 17:47:06.336263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.092 [2024-12-07 17:47:06.336391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:33.092 [2024-12-07 17:47:06.336404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.053 ms 00:30:33.092 [2024-12-07 17:47:06.336410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.092 [2024-12-07 17:47:06.336475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.092 [2024-12-07 17:47:06.336485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:33.092 [2024-12-07 17:47:06.336493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:30:33.092 [2024-12-07 17:47:06.336500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.092 [2024-12-07 17:47:06.336539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.092 [2024-12-07 17:47:06.336548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:33.092 [2024-12-07 17:47:06.336555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:33.092 [2024-12-07 17:47:06.336561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.092 [2024-12-07 17:47:06.336578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.092 [2024-12-07 17:47:06.336585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:33.092 [2024-12-07 17:47:06.336595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:33.092 [2024-12-07 17:47:06.336601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.092 [2024-12-07 17:47:06.336629] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:33.092 [2024-12-07 17:47:06.336638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.092 [2024-12-07 17:47:06.336645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:33.092 [2024-12-07 17:47:06.336651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:33.092 [2024-12-07 17:47:06.336659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.092 [2024-12-07 17:47:06.355204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.092 [2024-12-07 17:47:06.355320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:33.092 [2024-12-07 17:47:06.355333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.531 ms 00:30:33.092 [2024-12-07 17:47:06.355339] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.092 [2024-12-07 17:47:06.355395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.092 [2024-12-07 17:47:06.355403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:33.092 [2024-12-07 17:47:06.355410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:30:33.092 [2024-12-07 17:47:06.355416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.092 [2024-12-07 17:47:06.356350] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4194.820 ms, result 0 00:30:33.092 [2024-12-07 17:47:06.371585] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:33.092 [2024-12-07 17:47:06.387587] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:33.092 [2024-12-07 17:47:06.395714] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:33.660 17:47:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:33.660 17:47:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:33.660 17:47:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:33.660 17:47:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:33.660 17:47:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:33.660 [2024-12-07 17:47:07.000022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.660 [2024-12-07 17:47:07.000054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:33.660 [2024-12-07 17:47:07.000066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:33.660 [2024-12-07 17:47:07.000073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.660 [2024-12-07 17:47:07.000090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.660 [2024-12-07 17:47:07.000098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:33.660 [2024-12-07 17:47:07.000106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:33.660 [2024-12-07 17:47:07.000112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.660 [2024-12-07 17:47:07.000127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.660 [2024-12-07 17:47:07.000134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:33.660 [2024-12-07 17:47:07.000141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:33.660 [2024-12-07 17:47:07.000147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.660 [2024-12-07 17:47:07.000190] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.161 ms, result 0 00:30:33.660 true 00:30:33.660 17:47:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:33.919 { 00:30:33.919 "name": "ftl", 00:30:33.919 "properties": [ 00:30:33.919 { 00:30:33.919 "name": "superblock_version", 00:30:33.919 "value": 5, 00:30:33.919 "read-only": true 00:30:33.919 }, 
00:30:33.919 { 00:30:33.919 "name": "base_device", 00:30:33.919 "bands": [ 00:30:33.919 { 00:30:33.919 "id": 0, 00:30:33.919 "state": "CLOSED", 00:30:33.919 "validity": 1.0 00:30:33.919 }, 00:30:33.919 { 00:30:33.919 "id": 1, 00:30:33.919 "state": "CLOSED", 00:30:33.919 "validity": 1.0 00:30:33.919 }, 00:30:33.919 { 00:30:33.919 "id": 2, 00:30:33.919 "state": "CLOSED", 00:30:33.919 "validity": 0.007843137254901933 00:30:33.919 }, 00:30:33.919 { 00:30:33.919 "id": 3, 00:30:33.919 "state": "FREE", 00:30:33.919 "validity": 0.0 00:30:33.919 }, 00:30:33.919 { 00:30:33.919 "id": 4, 00:30:33.919 "state": "FREE", 00:30:33.919 "validity": 0.0 00:30:33.919 }, 00:30:33.919 { 00:30:33.919 "id": 5, 00:30:33.919 "state": "FREE", 00:30:33.919 "validity": 0.0 00:30:33.919 }, 00:30:33.919 { 00:30:33.919 "id": 6, 00:30:33.919 "state": "FREE", 00:30:33.919 "validity": 0.0 00:30:33.919 }, 00:30:33.919 { 00:30:33.919 "id": 7, 00:30:33.919 "state": "FREE", 00:30:33.919 "validity": 0.0 00:30:33.919 }, 00:30:33.919 { 00:30:33.919 "id": 8, 00:30:33.919 "state": "FREE", 00:30:33.919 "validity": 0.0 00:30:33.919 }, 00:30:33.919 { 00:30:33.919 "id": 9, 00:30:33.919 "state": "FREE", 00:30:33.919 "validity": 0.0 00:30:33.919 }, 00:30:33.919 { 00:30:33.919 "id": 10, 00:30:33.919 "state": "FREE", 00:30:33.919 "validity": 0.0 00:30:33.919 }, 00:30:33.919 { 00:30:33.919 "id": 11, 00:30:33.919 "state": "FREE", 00:30:33.919 "validity": 0.0 00:30:33.919 }, 00:30:33.919 { 00:30:33.919 "id": 12, 00:30:33.919 "state": "FREE", 00:30:33.919 "validity": 0.0 00:30:33.919 }, 00:30:33.919 { 00:30:33.919 "id": 13, 00:30:33.919 "state": "FREE", 00:30:33.919 "validity": 0.0 00:30:33.919 }, 00:30:33.919 { 00:30:33.919 "id": 14, 00:30:33.919 "state": "FREE", 00:30:33.919 "validity": 0.0 00:30:33.919 }, 00:30:33.919 { 00:30:33.919 "id": 15, 00:30:33.919 "state": "FREE", 00:30:33.919 "validity": 0.0 00:30:33.919 }, 00:30:33.919 { 00:30:33.919 "id": 16, 00:30:33.919 "state": "FREE", 00:30:33.919 "validity": 0.0 00:30:33.919 }, 00:30:33.919 { 00:30:33.919 "id": 17, 00:30:33.919 "state": "FREE", 00:30:33.919 "validity": 0.0 00:30:33.919 } 00:30:33.919 ], 00:30:33.919 "read-only": true 00:30:33.919 }, 00:30:33.919 { 00:30:33.919 "name": "cache_device", 00:30:33.919 "type": "bdev", 00:30:33.919 "chunks": [ 00:30:33.919 { 00:30:33.919 "id": 0, 00:30:33.919 "state": "INACTIVE", 00:30:33.919 "utilization": 0.0 00:30:33.919 }, 00:30:33.919 { 00:30:33.919 "id": 1, 00:30:33.919 "state": "OPEN", 00:30:33.919 "utilization": 0.0 00:30:33.919 }, 00:30:33.919 { 00:30:33.919 "id": 2, 00:30:33.919 "state": "OPEN", 00:30:33.919 "utilization": 0.0 00:30:33.919 }, 00:30:33.919 { 00:30:33.919 "id": 3, 00:30:33.919 "state": "FREE", 00:30:33.919 "utilization": 0.0 00:30:33.919 }, 00:30:33.919 { 00:30:33.919 "id": 4, 00:30:33.919 "state": "FREE", 00:30:33.919 "utilization": 0.0 00:30:33.919 } 00:30:33.919 ], 00:30:33.919 "read-only": true 00:30:33.919 }, 00:30:33.919 { 00:30:33.919 "name": "verbose_mode", 00:30:33.919 "value": true, 00:30:33.919 "unit": "", 00:30:33.919 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:33.919 }, 00:30:33.919 { 00:30:33.919 "name": "prep_upgrade_on_shutdown", 00:30:33.919 "value": false, 00:30:33.919 "unit": "", 00:30:33.919 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:33.919 } 00:30:33.919 ] 00:30:33.919 } 00:30:33.919 17:47:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:30:33.919 17:47:07 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:33.919 17:47:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:34.178 17:47:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:34.178 17:47:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:34.178 17:47:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:34.178 17:47:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:34.178 17:47:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:34.436 17:47:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:34.437 17:47:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:34.437 17:47:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:34.437 17:47:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:34.437 Validate MD5 checksum, iteration 1 00:30:34.437 17:47:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:34.437 17:47:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:34.437 17:47:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:34.437 17:47:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:34.437 17:47:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:34.437 17:47:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:34.437 17:47:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:34.437 17:47:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:34.437 17:47:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:34.437 [2024-12-07 17:47:07.691253] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
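
The two jq filters above collapse the bdev_ftl_get_properties JSON into single counts: cache-device chunks still holding data, and bands left in OPENED state. Note that in the dumps above the bands sit under a property named "base_device", so a select on .name == "bands" matches nothing and yields 0 either way. A standalone sketch of the same filters run against a saved properties dump (props.json is an assumed file name; the filter bodies are copied from the trace):

  # used: chunks of the cache device with non-zero utilization
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl > props.json
  jq '[.properties[] | select(.name == "cache_device")
       | .chunks[] | select(.utilization != 0.0)] | length' props.json

  # opened: bands a previous run left half-written
  jq '[.properties[] | select(.name == "bands")
       | .bands[] | select(.state == "OPENED")] | length' props.json

In this run both come back 0, matching the used=0 and opened=0 assignments in the trace.
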
00:30:34.437 [2024-12-07 17:47:07.691540] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83586 ] 00:30:34.694 [2024-12-07 17:47:07.851618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.694 [2024-12-07 17:47:07.944128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.077  [2024-12-07T17:47:10.097Z] Copying: 648/1024 [MB] (648 MBps) [2024-12-07T17:47:11.475Z] Copying: 1024/1024 [MB] (average 642 MBps) 00:30:38.093 00:30:38.093 17:47:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:38.093 17:47:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:40.010 17:47:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:40.010 17:47:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=29e19a4751922585703ce0fe809f177c 00:30:40.010 Validate MD5 checksum, iteration 2 00:30:40.010 17:47:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 29e19a4751922585703ce0fe809f177c != \2\9\e\1\9\a\4\7\5\1\9\2\2\5\8\5\7\0\3\c\e\0\f\e\8\0\9\f\1\7\7\c ]] 00:30:40.010 17:47:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:40.010 17:47:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:40.010 17:47:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:40.010 17:47:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:40.010 17:47:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:40.010 17:47:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:40.010 17:47:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:40.010 17:47:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:40.010 17:47:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:40.010 [2024-12-07 17:47:13.353023] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
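
Iteration 1 above reads the first 1024 MiB window of ftln1 through spdk_dd, hashes the output file, and keeps the sum; iteration 2 repeats the read at skip=1024. A condensed sketch of that loop, with the spdk_dd flags and file paths copied from the trace (the loop structure itself is illustrative):

  #!/usr/bin/env bash
  # Read successive 1 GiB windows from the ftln1 bdev and record their MD5 sums.
  DD_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  INI_JSON=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
  OUT=/home/vagrant/spdk_repo/spdk/test/ftl/file

  skip=0
  for i in 1 2; do
    "$DD_BIN" --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json="$INI_JSON" --ib=ftln1 --of="$OUT" \
      --bs=1048576 --count=1024 --qd=2 --skip="$skip"
    sums[$i]=$(md5sum "$OUT" | cut -d' ' -f1)   # e.g. 29e19a47... for iteration 1
    skip=$((skip + 1024))
  done
  echo "${sums[@]}"

The harness then compares each sum against the value recorded earlier, which is what the [[ sum != escaped-sum ]] checks in the trace are doing.
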
00:30:40.010 [2024-12-07 17:47:13.353284] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83647 ] 00:30:40.272 [2024-12-07 17:47:13.512547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.272 [2024-12-07 17:47:13.607766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.190  [2024-12-07T17:47:16.139Z] Copying: 553/1024 [MB] (553 MBps) [2024-12-07T17:47:18.051Z] Copying: 1024/1024 [MB] (average 547 MBps) 00:30:44.670 00:30:44.670 17:47:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:44.670 17:47:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:46.582 17:47:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:46.582 17:47:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fd945eaefdae6b992cf664860f4de993 00:30:46.582 17:47:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fd945eaefdae6b992cf664860f4de993 != \f\d\9\4\5\e\a\e\f\d\a\e\6\b\9\9\2\c\f\6\6\4\8\6\0\f\4\d\e\9\9\3 ]] 00:30:46.582 17:47:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:46.582 17:47:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:46.582 17:47:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:46.582 17:47:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 83493 ]] 00:30:46.582 17:47:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 83493 00:30:46.582 17:47:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:46.583 17:47:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:46.583 17:47:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:46.583 17:47:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:46.583 17:47:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:46.583 17:47:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83741 00:30:46.583 17:47:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:46.583 17:47:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:46.583 17:47:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83741 00:30:46.583 17:47:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83741 ']' 00:30:46.583 17:47:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:46.583 17:47:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:46.583 17:47:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:46.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
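The kill -9 above is the point of the whole test: tcp_target_shutdown_dirty SIGKILLs the target (pid 83493) so FTL never gets to persist a clean-shutdown marker, then tcp_target_setup relaunches spdk_tgt (pid 83741) from the saved tgt.json, and the new process has to recover the device state from shared memory and on-disk metadata. A sketch of that pair, with bodies paraphrased from the ftl/common.sh line numbers in the log:

    tcp_target_shutdown_dirty() {
        [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid   # no graceful FTL shutdown
        unset spdk_tgt_pid
    }

    tcp_target_setup() {
        "$rootdir/build/bin/spdk_tgt" '--cpumask=[0]' \
            --config="$rootdir/test/ftl/config/tgt.json" &
        spdk_tgt_pid=$!
        waitforlisten $spdk_tgt_pid   # block until /var/tmp/spdk.sock accepts RPCs
    }

Everything that follows, from the 'SHM: clean 0, shm_clean 0' superblock load through the P2L checkpoint restore and the two 'Recover open chunk' passes, is FTL reconstructing the state the SIGKILL threw away.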
00:30:46.583 17:47:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:46.583 17:47:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:46.583 [2024-12-07 17:47:19.845886] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:30:46.583 [2024-12-07 17:47:19.846135] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83741 ] 00:30:46.583 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 83493 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:46.844 [2024-12-07 17:47:20.000108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.844 [2024-12-07 17:47:20.103927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.414 [2024-12-07 17:47:20.738069] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:47.414 [2024-12-07 17:47:20.738294] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:47.675 [2024-12-07 17:47:20.886691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.675 [2024-12-07 17:47:20.886828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:47.675 [2024-12-07 17:47:20.886846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:47.675 [2024-12-07 17:47:20.886854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.675 [2024-12-07 17:47:20.886905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.675 [2024-12-07 17:47:20.886914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:47.675 [2024-12-07 17:47:20.886921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:30:47.675 [2024-12-07 17:47:20.886927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.675 [2024-12-07 17:47:20.886948] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:47.675 [2024-12-07 17:47:20.887542] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:47.675 [2024-12-07 17:47:20.887559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.675 [2024-12-07 17:47:20.887566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:47.675 [2024-12-07 17:47:20.887574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.618 ms 00:30:47.675 [2024-12-07 17:47:20.887581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.675 [2024-12-07 17:47:20.887817] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:47.675 [2024-12-07 17:47:20.901603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.675 [2024-12-07 17:47:20.901720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:47.675 [2024-12-07 17:47:20.901737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.787 ms 00:30:47.675 [2024-12-07 17:47:20.901744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.675 [2024-12-07 17:47:20.908871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:30:47.675 [2024-12-07 17:47:20.908952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:47.675 [2024-12-07 17:47:20.909012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:30:47.675 [2024-12-07 17:47:20.909031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.675 [2024-12-07 17:47:20.909298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.675 [2024-12-07 17:47:20.909626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:47.675 [2024-12-07 17:47:20.909699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.195 ms 00:30:47.675 [2024-12-07 17:47:20.909719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.675 [2024-12-07 17:47:20.909788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.675 [2024-12-07 17:47:20.909809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:47.675 [2024-12-07 17:47:20.909825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:30:47.675 [2024-12-07 17:47:20.909840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.675 [2024-12-07 17:47:20.909912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.675 [2024-12-07 17:47:20.909932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:47.675 [2024-12-07 17:47:20.909949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:47.675 [2024-12-07 17:47:20.909965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.675 [2024-12-07 17:47:20.910005] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:47.675 [2024-12-07 17:47:20.912439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.675 [2024-12-07 17:47:20.912532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:47.675 [2024-12-07 17:47:20.912581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.439 ms 00:30:47.675 [2024-12-07 17:47:20.912598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.675 [2024-12-07 17:47:20.912635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.675 [2024-12-07 17:47:20.912651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:47.675 [2024-12-07 17:47:20.912666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:47.675 [2024-12-07 17:47:20.912712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.675 [2024-12-07 17:47:20.912742] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:47.675 [2024-12-07 17:47:20.912770] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:47.675 [2024-12-07 17:47:20.912814] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:47.675 [2024-12-07 17:47:20.912876] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:47.675 [2024-12-07 17:47:20.912989] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:47.675 [2024-12-07 17:47:20.913077] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:47.675 [2024-12-07 17:47:20.913104] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:47.675 [2024-12-07 17:47:20.913128] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:47.675 [2024-12-07 17:47:20.913152] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:47.675 [2024-12-07 17:47:20.913213] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:47.675 [2024-12-07 17:47:20.913230] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:47.675 [2024-12-07 17:47:20.913244] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:47.675 [2024-12-07 17:47:20.913259] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:47.675 [2024-12-07 17:47:20.913279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.675 [2024-12-07 17:47:20.913293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:47.675 [2024-12-07 17:47:20.913308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.538 ms 00:30:47.675 [2024-12-07 17:47:20.913323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.675 [2024-12-07 17:47:20.913412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.675 [2024-12-07 17:47:20.913431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:47.675 [2024-12-07 17:47:20.913446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:47.675 [2024-12-07 17:47:20.913502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.675 [2024-12-07 17:47:20.913608] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:47.675 [2024-12-07 17:47:20.913633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:47.675 [2024-12-07 17:47:20.913649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:47.675 [2024-12-07 17:47:20.913697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:47.675 [2024-12-07 17:47:20.913713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:47.675 [2024-12-07 17:47:20.913727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:47.675 [2024-12-07 17:47:20.913742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:47.675 [2024-12-07 17:47:20.913775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:47.675 [2024-12-07 17:47:20.913792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:47.675 [2024-12-07 17:47:20.913807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:47.675 [2024-12-07 17:47:20.913821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:47.675 [2024-12-07 17:47:20.913834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:47.675 [2024-12-07 17:47:20.913873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:47.675 [2024-12-07 17:47:20.913891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:47.675 [2024-12-07 17:47:20.913905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:30:47.675 [2024-12-07 17:47:20.913918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:47.675 [2024-12-07 17:47:20.913932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:47.675 [2024-12-07 17:47:20.913947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:47.675 [2024-12-07 17:47:20.913989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:47.675 [2024-12-07 17:47:20.914007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:47.675 [2024-12-07 17:47:20.914022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:47.675 [2024-12-07 17:47:20.914042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:47.675 [2024-12-07 17:47:20.914089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:47.675 [2024-12-07 17:47:20.914106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:47.675 [2024-12-07 17:47:20.914122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:47.675 [2024-12-07 17:47:20.914138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:47.675 [2024-12-07 17:47:20.914153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:47.675 [2024-12-07 17:47:20.914185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:47.675 [2024-12-07 17:47:20.914242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:47.675 [2024-12-07 17:47:20.914259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:47.675 [2024-12-07 17:47:20.914303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:47.675 [2024-12-07 17:47:20.914319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:47.675 [2024-12-07 17:47:20.914334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:47.675 [2024-12-07 17:47:20.914349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:47.676 [2024-12-07 17:47:20.914362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:47.676 [2024-12-07 17:47:20.914376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:47.676 [2024-12-07 17:47:20.914390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:47.676 [2024-12-07 17:47:20.914404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:47.676 [2024-12-07 17:47:20.914445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:47.676 [2024-12-07 17:47:20.914462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:47.676 [2024-12-07 17:47:20.914476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:47.676 [2024-12-07 17:47:20.914489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:47.676 [2024-12-07 17:47:20.914503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:47.676 [2024-12-07 17:47:20.914517] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:47.676 [2024-12-07 17:47:20.914532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:47.676 [2024-12-07 17:47:20.914547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:47.676 [2024-12-07 17:47:20.914581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:30:47.676 [2024-12-07 17:47:20.914598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:47.676 [2024-12-07 17:47:20.914612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:47.676 [2024-12-07 17:47:20.914626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:47.676 [2024-12-07 17:47:20.914722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:47.676 [2024-12-07 17:47:20.914738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:47.676 [2024-12-07 17:47:20.914752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:47.676 [2024-12-07 17:47:20.914769] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:47.676 [2024-12-07 17:47:20.914793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:47.676 [2024-12-07 17:47:20.914816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:47.676 [2024-12-07 17:47:20.914839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:47.676 [2024-12-07 17:47:20.914886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:47.676 [2024-12-07 17:47:20.914909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:47.676 [2024-12-07 17:47:20.914930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:47.676 [2024-12-07 17:47:20.914952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:47.676 [2024-12-07 17:47:20.914973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:47.676 [2024-12-07 17:47:20.915020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:47.676 [2024-12-07 17:47:20.915071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:47.676 [2024-12-07 17:47:20.915094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:47.676 [2024-12-07 17:47:20.915116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:47.676 [2024-12-07 17:47:20.915138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:47.676 [2024-12-07 17:47:20.915172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:47.676 [2024-12-07 17:47:20.915202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:47.676 [2024-12-07 17:47:20.915258] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:30:47.676 [2024-12-07 17:47:20.915282] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:47.676 [2024-12-07 17:47:20.915309] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:47.676 [2024-12-07 17:47:20.915331] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:47.676 [2024-12-07 17:47:20.915352] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:47.676 [2024-12-07 17:47:20.915374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:47.676 [2024-12-07 17:47:20.915434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.676 [2024-12-07 17:47:20.915451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:47.676 [2024-12-07 17:47:20.915466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.879 ms 00:30:47.676 [2024-12-07 17:47:20.915481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.676 [2024-12-07 17:47:20.936815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.676 [2024-12-07 17:47:20.936959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:47.676 [2024-12-07 17:47:20.937012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.280 ms 00:30:47.676 [2024-12-07 17:47:20.937030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.676 [2024-12-07 17:47:20.937073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.676 [2024-12-07 17:47:20.937089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:47.676 [2024-12-07 17:47:20.937105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:30:47.676 [2024-12-07 17:47:20.937120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.676 [2024-12-07 17:47:20.963473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.676 [2024-12-07 17:47:20.963567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:47.676 [2024-12-07 17:47:20.963606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.298 ms 00:30:47.676 [2024-12-07 17:47:20.963623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.676 [2024-12-07 17:47:20.963662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.676 [2024-12-07 17:47:20.963678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:47.676 [2024-12-07 17:47:20.963694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:47.676 [2024-12-07 17:47:20.963712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.676 [2024-12-07 17:47:20.963797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.676 [2024-12-07 17:47:20.963818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:47.676 [2024-12-07 17:47:20.963875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:30:47.676 [2024-12-07 17:47:20.963893] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:47.676 [2024-12-07 17:47:20.963940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.676 [2024-12-07 17:47:20.963957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:47.676 [2024-12-07 17:47:20.963973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:30:47.676 [2024-12-07 17:47:20.963998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.676 [2024-12-07 17:47:20.977270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.676 [2024-12-07 17:47:20.977360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:47.676 [2024-12-07 17:47:20.977401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.241 ms 00:30:47.676 [2024-12-07 17:47:20.977418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.676 [2024-12-07 17:47:20.977514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.676 [2024-12-07 17:47:20.977535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:47.676 [2024-12-07 17:47:20.977552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:47.676 [2024-12-07 17:47:20.977567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.676 [2024-12-07 17:47:21.006851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.676 [2024-12-07 17:47:21.006973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:47.676 [2024-12-07 17:47:21.007030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.260 ms 00:30:47.676 [2024-12-07 17:47:21.007050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.676 [2024-12-07 17:47:21.014406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.676 [2024-12-07 17:47:21.014491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:47.676 [2024-12-07 17:47:21.014543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.385 ms 00:30:47.676 [2024-12-07 17:47:21.014561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.937 [2024-12-07 17:47:21.062020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.937 [2024-12-07 17:47:21.062153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:47.937 [2024-12-07 17:47:21.062168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.406 ms 00:30:47.937 [2024-12-07 17:47:21.062176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.937 [2024-12-07 17:47:21.062299] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:47.937 [2024-12-07 17:47:21.062403] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:47.937 [2024-12-07 17:47:21.062503] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:47.937 [2024-12-07 17:47:21.062602] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:47.937 [2024-12-07 17:47:21.062610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.937 [2024-12-07 17:47:21.062618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:47.937 [2024-12-07 
17:47:21.062626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.402 ms 00:30:47.937 [2024-12-07 17:47:21.062633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.937 [2024-12-07 17:47:21.062680] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:47.937 [2024-12-07 17:47:21.062690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.937 [2024-12-07 17:47:21.062701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:47.937 [2024-12-07 17:47:21.062708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:30:47.937 [2024-12-07 17:47:21.062714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.937 [2024-12-07 17:47:21.075084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.937 [2024-12-07 17:47:21.075116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:47.937 [2024-12-07 17:47:21.075125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.353 ms 00:30:47.937 [2024-12-07 17:47:21.075132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.937 [2024-12-07 17:47:21.081565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.937 [2024-12-07 17:47:21.081666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:47.937 [2024-12-07 17:47:21.081678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:47.937 [2024-12-07 17:47:21.081685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:47.937 [2024-12-07 17:47:21.081754] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:47.937 [2024-12-07 17:47:21.081905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:47.937 [2024-12-07 17:47:21.081915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:47.937 [2024-12-07 17:47:21.081923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.152 ms 00:30:47.937 [2024-12-07 17:47:21.081929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.507 [2024-12-07 17:47:21.589801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.507 [2024-12-07 17:47:21.589864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:48.507 [2024-12-07 17:47:21.589877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 507.188 ms 00:30:48.507 [2024-12-07 17:47:21.589884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.507 [2024-12-07 17:47:21.593362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.507 [2024-12-07 17:47:21.593392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:48.507 [2024-12-07 17:47:21.593401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.121 ms 00:30:48.507 [2024-12-07 17:47:21.593408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.507 [2024-12-07 17:47:21.594475] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:30:48.507 [2024-12-07 17:47:21.594505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.507 [2024-12-07 17:47:21.594513] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:48.507 [2024-12-07 17:47:21.594520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.069 ms 00:30:48.507 [2024-12-07 17:47:21.594526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.507 [2024-12-07 17:47:21.594552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.507 [2024-12-07 17:47:21.594560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:48.507 [2024-12-07 17:47:21.594567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:48.507 [2024-12-07 17:47:21.594578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.507 [2024-12-07 17:47:21.594604] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 512.850 ms, result 0 00:30:48.507 [2024-12-07 17:47:21.594636] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:48.507 [2024-12-07 17:47:21.594802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.507 [2024-12-07 17:47:21.594818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:48.507 [2024-12-07 17:47:21.594824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.167 ms 00:30:48.507 [2024-12-07 17:47:21.594830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.080 [2024-12-07 17:47:22.227953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.080 [2024-12-07 17:47:22.228083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:49.080 [2024-12-07 17:47:22.228124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 632.314 ms 00:30:49.080 [2024-12-07 17:47:22.228134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.080 [2024-12-07 17:47:22.233959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.080 [2024-12-07 17:47:22.234156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:49.080 [2024-12-07 17:47:22.234234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.996 ms 00:30:49.080 [2024-12-07 17:47:22.234260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.080 [2024-12-07 17:47:22.235501] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:49.080 [2024-12-07 17:47:22.235593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.080 [2024-12-07 17:47:22.235692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:49.080 [2024-12-07 17:47:22.235718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.295 ms 00:30:49.080 [2024-12-07 17:47:22.235738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.080 [2024-12-07 17:47:22.235837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.080 [2024-12-07 17:47:22.235863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:49.080 [2024-12-07 17:47:22.235885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:49.080 [2024-12-07 17:47:22.235907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.080 [2024-12-07 
17:47:22.236063] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 641.392 ms, result 0 00:30:49.080 [2024-12-07 17:47:22.236155] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:49.080 [2024-12-07 17:47:22.236193] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:49.080 [2024-12-07 17:47:22.236226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.080 [2024-12-07 17:47:22.236250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:49.080 [2024-12-07 17:47:22.236365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1154.488 ms 00:30:49.080 [2024-12-07 17:47:22.236389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.080 [2024-12-07 17:47:22.236439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.080 [2024-12-07 17:47:22.236474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:49.080 [2024-12-07 17:47:22.236496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:49.080 [2024-12-07 17:47:22.236516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.080 [2024-12-07 17:47:22.250804] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:49.080 [2024-12-07 17:47:22.251089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.080 [2024-12-07 17:47:22.251128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:49.080 [2024-12-07 17:47:22.251349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.554 ms 00:30:49.080 [2024-12-07 17:47:22.251374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.080 [2024-12-07 17:47:22.252161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.080 [2024-12-07 17:47:22.252289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:49.080 [2024-12-07 17:47:22.252312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.690 ms 00:30:49.080 [2024-12-07 17:47:22.252321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.080 [2024-12-07 17:47:22.254590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.080 [2024-12-07 17:47:22.254623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:49.080 [2024-12-07 17:47:22.254635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.244 ms 00:30:49.080 [2024-12-07 17:47:22.254644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.080 [2024-12-07 17:47:22.254692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.081 [2024-12-07 17:47:22.254703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:49.081 [2024-12-07 17:47:22.254714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:49.081 [2024-12-07 17:47:22.254730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.081 [2024-12-07 17:47:22.254855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.081 [2024-12-07 17:47:22.254868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:49.081 
[2024-12-07 17:47:22.254878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:30:49.081 [2024-12-07 17:47:22.254887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.081 [2024-12-07 17:47:22.254912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.081 [2024-12-07 17:47:22.254921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:49.081 [2024-12-07 17:47:22.254931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:49.081 [2024-12-07 17:47:22.254940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.081 [2024-12-07 17:47:22.255005] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:49.081 [2024-12-07 17:47:22.255018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.081 [2024-12-07 17:47:22.255027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:49.081 [2024-12-07 17:47:22.255036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:49.081 [2024-12-07 17:47:22.255044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.081 [2024-12-07 17:47:22.255100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.081 [2024-12-07 17:47:22.255111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:49.081 [2024-12-07 17:47:22.255120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:30:49.081 [2024-12-07 17:47:22.255131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.081 [2024-12-07 17:47:22.256745] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1369.458 ms, result 0 00:30:49.081 [2024-12-07 17:47:22.271999] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.081 [2024-12-07 17:47:22.288016] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:49.081 [2024-12-07 17:47:22.297769] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:49.081 Validate MD5 checksum, iteration 1 00:30:49.081 17:47:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:49.081 17:47:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:49.081 17:47:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:49.081 17:47:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:49.081 17:47:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:49.081 17:47:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:49.081 17:47:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:49.081 17:47:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:49.081 17:47:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:49.081 17:47:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:49.081 17:47:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:49.081 17:47:22 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:49.081 17:47:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:49.081 17:47:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:49.081 17:47:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:49.081 [2024-12-07 17:47:22.427749] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:30:49.081 [2024-12-07 17:47:22.428098] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83779 ] 00:30:49.343 [2024-12-07 17:47:22.595234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.343 [2024-12-07 17:47:22.717145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.260  [2024-12-07T17:47:25.214Z] Copying: 580/1024 [MB] (580 MBps) [2024-12-07T17:47:26.152Z] Copying: 1024/1024 [MB] (average 599 MBps) 00:30:52.770 00:30:52.770 17:47:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:52.770 17:47:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:55.314 17:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:55.314 17:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=29e19a4751922585703ce0fe809f177c 00:30:55.314 17:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 29e19a4751922585703ce0fe809f177c != \2\9\e\1\9\a\4\7\5\1\9\2\2\5\8\5\7\0\3\c\e\0\f\e\8\0\9\f\1\7\7\c ]] 00:30:55.314 17:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:55.314 17:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:55.314 17:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:55.314 Validate MD5 checksum, iteration 2 00:30:55.314 17:47:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:55.314 17:47:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:55.314 17:47:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:55.314 17:47:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:55.314 17:47:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:55.314 17:47:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:55.314 [2024-12-07 17:47:28.199960] Starting SPDK v25.01-pre git sha1 
a2f5e1c2d / DPDK 24.03.0 initialization... 00:30:55.315 [2024-12-07 17:47:28.200090] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83846 ] 00:30:55.315 [2024-12-07 17:47:28.355287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.315 [2024-12-07 17:47:28.429773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:56.691  [2024-12-07T17:47:30.642Z] Copying: 654/1024 [MB] (654 MBps) [2024-12-07T17:47:31.213Z] Copying: 1024/1024 [MB] (average 670 MBps) 00:30:57.831 00:30:57.831 17:47:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:57.831 17:47:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fd945eaefdae6b992cf664860f4de993 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fd945eaefdae6b992cf664860f4de993 != \f\d\9\4\5\e\a\e\f\d\a\e\6\b\9\9\2\c\f\6\6\4\8\6\0\f\4\d\e\9\9\3 ]] 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83741 ]] 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83741 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83741 ']' 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83741 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83741 00:31:00.376 killing process with pid 83741 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83741' 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 83741 00:31:00.376 17:47:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83741 00:31:00.638 [2024-12-07 17:47:33.922716] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:00.638 [2024-12-07 17:47:33.935306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.638 [2024-12-07 17:47:33.935342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:00.638 [2024-12-07 17:47:33.935354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:00.638 [2024-12-07 17:47:33.935361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.638 [2024-12-07 17:47:33.935379] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:00.638 [2024-12-07 17:47:33.937676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.638 [2024-12-07 17:47:33.937706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:00.638 [2024-12-07 17:47:33.937714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.286 ms 00:31:00.638 [2024-12-07 17:47:33.937721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.638 [2024-12-07 17:47:33.937910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.638 [2024-12-07 17:47:33.937919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:00.638 [2024-12-07 17:47:33.937926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.160 ms 00:31:00.638 [2024-12-07 17:47:33.937932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.638 [2024-12-07 17:47:33.939383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.638 [2024-12-07 17:47:33.939406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:00.638 [2024-12-07 17:47:33.939414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.439 ms 00:31:00.638 [2024-12-07 17:47:33.939424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.638 [2024-12-07 17:47:33.940294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.638 [2024-12-07 17:47:33.940312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:00.638 [2024-12-07 17:47:33.940319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.844 ms 00:31:00.638 [2024-12-07 17:47:33.940326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.638 [2024-12-07 17:47:33.948729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.638 [2024-12-07 17:47:33.948754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:00.638 [2024-12-07 17:47:33.948766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.367 ms 00:31:00.638 [2024-12-07 17:47:33.948773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.638 [2024-12-07 17:47:33.953262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.638 [2024-12-07 17:47:33.953287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:00.638 [2024-12-07 17:47:33.953295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.463 ms 00:31:00.638 [2024-12-07 17:47:33.953303] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:00.638 [2024-12-07 17:47:33.953381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.638 [2024-12-07 17:47:33.953390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:00.638 [2024-12-07 17:47:33.953397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:31:00.638 [2024-12-07 17:47:33.953407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.638 [2024-12-07 17:47:33.961271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.638 [2024-12-07 17:47:33.961294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:00.638 [2024-12-07 17:47:33.961301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.850 ms 00:31:00.638 [2024-12-07 17:47:33.961307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.638 [2024-12-07 17:47:33.969329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.638 [2024-12-07 17:47:33.969358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:00.638 [2024-12-07 17:47:33.969365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.997 ms 00:31:00.638 [2024-12-07 17:47:33.969371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.638 [2024-12-07 17:47:33.977106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.638 [2024-12-07 17:47:33.977130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:00.638 [2024-12-07 17:47:33.977137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.710 ms 00:31:00.638 [2024-12-07 17:47:33.977143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.638 [2024-12-07 17:47:33.984700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.638 [2024-12-07 17:47:33.984859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:00.638 [2024-12-07 17:47:33.984871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.512 ms 00:31:00.638 [2024-12-07 17:47:33.984877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.638 [2024-12-07 17:47:33.984900] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:00.638 [2024-12-07 17:47:33.984912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:00.638 [2024-12-07 17:47:33.984920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:00.638 [2024-12-07 17:47:33.984926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:00.638 [2024-12-07 17:47:33.984933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:00.638 [2024-12-07 17:47:33.984939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:00.638 [2024-12-07 17:47:33.984945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:00.638 [2024-12-07 17:47:33.984951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:00.638 [2024-12-07 17:47:33.984957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:00.638 
[2024-12-07 17:47:33.984962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:00.638 [2024-12-07 17:47:33.984968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:00.638 [2024-12-07 17:47:33.984974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:00.638 [2024-12-07 17:47:33.984992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:00.638 [2024-12-07 17:47:33.984998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:00.638 [2024-12-07 17:47:33.985004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:00.638 [2024-12-07 17:47:33.985010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:00.638 [2024-12-07 17:47:33.985016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:00.638 [2024-12-07 17:47:33.985022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:00.638 [2024-12-07 17:47:33.985029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:00.638 [2024-12-07 17:47:33.985036] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:00.638 [2024-12-07 17:47:33.985042] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 57c8a49a-9a61-44e5-ae38-76b7064ca4ec 00:31:00.638 [2024-12-07 17:47:33.985049] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:00.638 [2024-12-07 17:47:33.985055] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:31:00.638 [2024-12-07 17:47:33.985061] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:31:00.638 [2024-12-07 17:47:33.985067] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:31:00.638 [2024-12-07 17:47:33.985073] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:00.638 [2024-12-07 17:47:33.985080] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:00.638 [2024-12-07 17:47:33.985090] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:00.638 [2024-12-07 17:47:33.985095] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:00.638 [2024-12-07 17:47:33.985101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:00.638 [2024-12-07 17:47:33.985108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.638 [2024-12-07 17:47:33.985116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:00.638 [2024-12-07 17:47:33.985123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.209 ms 00:31:00.638 [2024-12-07 17:47:33.985129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.638 [2024-12-07 17:47:33.995156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.638 [2024-12-07 17:47:33.995262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:00.638 [2024-12-07 17:47:33.995276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.003 ms 00:31:00.638 [2024-12-07 17:47:33.995282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
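Because this final stop goes through killprocess (a plain kill followed by wait, per the autotest_common.sh trace) rather than kill -9, FTL runs its full 'FTL shutdown' sequence: persist L2P, NV cache, band and trim metadata, and the superblock, then 'Set FTL clean state', before dumping the bands-validity and stats tables above. A hypothetical post-processing one-liner (not part of the test; the ftl.log filename is assumed) to tally that dump from a captured log:

    awk '/ftl_dev_dump_bands/ && /Band [0-9]+:/ {
             for (i = 1; i < NF; i++) if ($i == "state:") states[$(i + 1)]++
         }
         END { for (s in states) printf "%-8s %d band(s)\n", s, states[s] }' ftl.log

On this run it would report 3 closed and 15 free bands, consistent with Band 3 holding only 2048 valid blocks and with the stats block's WAF of 'inf' (320 total writes, 0 user writes).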
00:31:00.638 [2024-12-07 17:47:33.995575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.639 [2024-12-07 17:47:33.995583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:00.639 [2024-12-07 17:47:33.995590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.274 ms 00:31:00.639 [2024-12-07 17:47:33.995596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.900 [2024-12-07 17:47:34.030662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:00.900 [2024-12-07 17:47:34.030761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:00.900 [2024-12-07 17:47:34.030773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:00.900 [2024-12-07 17:47:34.030784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.900 [2024-12-07 17:47:34.030809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:00.900 [2024-12-07 17:47:34.030817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:00.900 [2024-12-07 17:47:34.030823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:00.900 [2024-12-07 17:47:34.030829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.900 [2024-12-07 17:47:34.030898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:00.900 [2024-12-07 17:47:34.030906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:00.900 [2024-12-07 17:47:34.030913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:00.900 [2024-12-07 17:47:34.030919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.900 [2024-12-07 17:47:34.030935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:00.900 [2024-12-07 17:47:34.030941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:00.900 [2024-12-07 17:47:34.030948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:00.900 [2024-12-07 17:47:34.030954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.900 [2024-12-07 17:47:34.094376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:00.900 [2024-12-07 17:47:34.094412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:00.900 [2024-12-07 17:47:34.094421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:00.900 [2024-12-07 17:47:34.094427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.900 [2024-12-07 17:47:34.145876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:00.900 [2024-12-07 17:47:34.146021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:00.900 [2024-12-07 17:47:34.146034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:00.900 [2024-12-07 17:47:34.146041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.900 [2024-12-07 17:47:34.146111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:00.900 [2024-12-07 17:47:34.146119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:00.901 [2024-12-07 17:47:34.146126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:00.901 [2024-12-07 17:47:34.146133] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.901 [2024-12-07 17:47:34.146183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:00.901 [2024-12-07 17:47:34.146202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:00.901 [2024-12-07 17:47:34.146208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:00.901 [2024-12-07 17:47:34.146214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.901 [2024-12-07 17:47:34.146296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:00.901 [2024-12-07 17:47:34.146303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:00.901 [2024-12-07 17:47:34.146310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:00.901 [2024-12-07 17:47:34.146316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.901 [2024-12-07 17:47:34.146343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:00.901 [2024-12-07 17:47:34.146351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:00.901 [2024-12-07 17:47:34.146360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:00.901 [2024-12-07 17:47:34.146367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.901 [2024-12-07 17:47:34.146402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:00.901 [2024-12-07 17:47:34.146409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:00.901 [2024-12-07 17:47:34.146415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:00.901 [2024-12-07 17:47:34.146421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.901 [2024-12-07 17:47:34.146459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:00.901 [2024-12-07 17:47:34.146469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:00.901 [2024-12-07 17:47:34.146475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:00.901 [2024-12-07 17:47:34.146482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.901 [2024-12-07 17:47:34.146587] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 211.253 ms, result 0 00:31:01.491 17:47:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:01.491 17:47:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:01.491 17:47:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:31:01.491 17:47:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:31:01.491 17:47:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:31:01.491 17:47:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:01.491 Remove shared memory files 00:31:01.491 17:47:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:31:01.491 17:47:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:01.491 17:47:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:01.491 17:47:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:01.491 17:47:34 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid83493 00:31:01.491 17:47:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:01.751 17:47:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:01.751 ************************************ 00:31:01.751 END TEST ftl_upgrade_shutdown 00:31:01.751 ************************************ 00:31:01.751 00:31:01.751 real 1m18.947s 00:31:01.751 user 1m49.061s 00:31:01.751 sys 0m19.521s 00:31:01.751 17:47:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:01.751 17:47:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:01.751 17:47:34 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:31:01.751 17:47:34 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:01.751 17:47:34 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:31:01.751 17:47:34 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:01.751 17:47:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:01.751 ************************************ 00:31:01.751 START TEST ftl_restore_fast 00:31:01.751 ************************************ 00:31:01.751 17:47:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:01.751 * Looking for test storage... 00:31:01.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:01.751 17:47:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:01.751 17:47:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lcov --version 00:31:01.751 17:47:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:01.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.751 --rc genhtml_branch_coverage=1 00:31:01.751 --rc genhtml_function_coverage=1 00:31:01.751 --rc genhtml_legend=1 00:31:01.751 --rc geninfo_all_blocks=1 00:31:01.751 --rc geninfo_unexecuted_blocks=1 00:31:01.751 00:31:01.751 ' 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:01.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.751 --rc genhtml_branch_coverage=1 00:31:01.751 --rc genhtml_function_coverage=1 00:31:01.751 --rc genhtml_legend=1 00:31:01.751 --rc geninfo_all_blocks=1 00:31:01.751 --rc geninfo_unexecuted_blocks=1 00:31:01.751 00:31:01.751 ' 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:01.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.751 --rc genhtml_branch_coverage=1 00:31:01.751 --rc genhtml_function_coverage=1 00:31:01.751 --rc genhtml_legend=1 00:31:01.751 --rc geninfo_all_blocks=1 00:31:01.751 --rc geninfo_unexecuted_blocks=1 00:31:01.751 00:31:01.751 ' 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:01.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.751 --rc genhtml_branch_coverage=1 00:31:01.751 --rc genhtml_function_coverage=1 00:31:01.751 --rc genhtml_legend=1 00:31:01.751 --rc geninfo_all_blocks=1 00:31:01.751 --rc geninfo_unexecuted_blocks=1 00:31:01.751 00:31:01.751 ' 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:31:01.751 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
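
The xtrace above walks scripts/common.sh deciding whether lcov 1.15 is older than 2: both version strings are split into ver1/ver2 arrays and compared numerically field by field, lt returns success, and the lcov coverage options are exported. A standalone sketch of the same field-by-field idea, hedged as an approximation rather than the exact SPDK helper (the real cmp_versions also handles '-' separators and other operators; this covers only the numeric '<' path exercised here):

    # Return 0 (true) if $1 is a strictly lower version than $2, comparing
    # dot-separated numeric fields left to right; missing fields count as 0.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not "less than"
    }
    version_lt 1.15 2 && echo "1.15 < 2"   # matches the 'lt 1.15 2' result traced above
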
00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:31:01.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
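
restore.sh stages its work in a throwaway directory (mount_dir from mktemp -d, /tmp/tmp.PedxWAw03s in this run) and, a few trace lines below, installs trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT so state is torn down on any exit. The general pattern, sketched; the cleanup body is an assumption, since restore_kill's actual contents are not shown in this trace:

    # Scratch-dir plus cleanup-trap pattern used by the test scripts (sketch only;
    # the real restore_kill also stops spdk_tgt and removes its config files).
    mount_dir=$(mktemp -d)
    cleanup() {
        umount "$mount_dir" 2>/dev/null   # assumption: a filesystem may be mounted here
        rm -rf "$mount_dir"
    }
    trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
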
00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.PedxWAw03s 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=83991 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 83991 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 83991 ']' 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:01.752 17:47:35 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:31:02.011 [2024-12-07 17:47:35.156215] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
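
The getopts loop traced above uses the optstring ':u:c:f': -f sets fast_shutdown=1, -c takes the NV cache PCI address (0000:00:10.0), and the remaining positionals are shifted into the base device (0000:00:11.0). A minimal reconstruction of that pattern, with variable names taken from the trace; the -u branch is an assumption, since only -c and -f fire in this run:

    #!/usr/bin/env bash
    # Parse options the way the traced ':u:c:f' optstring implies.
    fast_shutdown=0 nv_cache='' uuid=''
    while getopts ':u:c:f' opt; do
        case $opt in
            u) uuid=$OPTARG ;;       # assumed: restore from an existing FTL UUID
            c) nv_cache=$OPTARG ;;   # NV cache controller BDF, e.g. 0000:00:10.0
            f) fast_shutdown=1 ;;    # enable the --fast-shutdown FTL path
            *) echo "usage: $0 [-u uuid] [-c bdf] [-f] base_bdf" >&2; exit 1 ;;
        esac
    done
    shift $(( OPTIND - 1 ))          # the trace shows 'shift 3' for '-f -c <bdf>'
    device=$1                        # remaining positional: base device BDF
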
00:31:02.011 [2024-12-07 17:47:35.156331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83991 ] 00:31:02.011 [2024-12-07 17:47:35.312887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.270 [2024-12-07 17:47:35.401946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.839 17:47:35 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:02.839 17:47:35 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:31:02.839 17:47:35 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:02.839 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:31:02.839 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:02.839 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:31:02.839 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:31:02.839 17:47:35 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:02.839 17:47:36 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:02.839 17:47:36 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:31:02.839 17:47:36 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:02.839 17:47:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:31:02.839 17:47:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:02.839 17:47:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:02.839 17:47:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:02.839 17:47:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:03.097 17:47:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:03.097 { 00:31:03.097 "name": "nvme0n1", 00:31:03.097 "aliases": [ 00:31:03.097 "d7908824-be99-4928-93f9-a96a3ae18369" 00:31:03.097 ], 00:31:03.097 "product_name": "NVMe disk", 00:31:03.097 "block_size": 4096, 00:31:03.097 "num_blocks": 1310720, 00:31:03.097 "uuid": "d7908824-be99-4928-93f9-a96a3ae18369", 00:31:03.097 "numa_id": -1, 00:31:03.097 "assigned_rate_limits": { 00:31:03.097 "rw_ios_per_sec": 0, 00:31:03.097 "rw_mbytes_per_sec": 0, 00:31:03.097 "r_mbytes_per_sec": 0, 00:31:03.097 "w_mbytes_per_sec": 0 00:31:03.097 }, 00:31:03.097 "claimed": true, 00:31:03.097 "claim_type": "read_many_write_one", 00:31:03.097 "zoned": false, 00:31:03.097 "supported_io_types": { 00:31:03.097 "read": true, 00:31:03.097 "write": true, 00:31:03.097 "unmap": true, 00:31:03.097 "flush": true, 00:31:03.097 "reset": true, 00:31:03.097 "nvme_admin": true, 00:31:03.097 "nvme_io": true, 00:31:03.097 "nvme_io_md": false, 00:31:03.097 "write_zeroes": true, 00:31:03.097 "zcopy": false, 00:31:03.097 "get_zone_info": false, 00:31:03.097 "zone_management": false, 00:31:03.097 "zone_append": false, 00:31:03.097 "compare": true, 00:31:03.097 "compare_and_write": false, 00:31:03.097 "abort": true, 00:31:03.097 "seek_hole": false, 00:31:03.097 "seek_data": false, 00:31:03.097 "copy": true, 00:31:03.097 "nvme_iov_md": 
false 00:31:03.097 }, 00:31:03.097 "driver_specific": { 00:31:03.097 "nvme": [ 00:31:03.097 { 00:31:03.097 "pci_address": "0000:00:11.0", 00:31:03.097 "trid": { 00:31:03.097 "trtype": "PCIe", 00:31:03.097 "traddr": "0000:00:11.0" 00:31:03.097 }, 00:31:03.097 "ctrlr_data": { 00:31:03.097 "cntlid": 0, 00:31:03.097 "vendor_id": "0x1b36", 00:31:03.097 "model_number": "QEMU NVMe Ctrl", 00:31:03.097 "serial_number": "12341", 00:31:03.097 "firmware_revision": "8.0.0", 00:31:03.097 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:03.097 "oacs": { 00:31:03.097 "security": 0, 00:31:03.097 "format": 1, 00:31:03.097 "firmware": 0, 00:31:03.097 "ns_manage": 1 00:31:03.097 }, 00:31:03.097 "multi_ctrlr": false, 00:31:03.097 "ana_reporting": false 00:31:03.097 }, 00:31:03.097 "vs": { 00:31:03.097 "nvme_version": "1.4" 00:31:03.097 }, 00:31:03.097 "ns_data": { 00:31:03.097 "id": 1, 00:31:03.097 "can_share": false 00:31:03.097 } 00:31:03.097 } 00:31:03.097 ], 00:31:03.097 "mp_policy": "active_passive" 00:31:03.097 } 00:31:03.097 } 00:31:03.097 ]' 00:31:03.098 17:47:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:03.098 17:47:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:03.098 17:47:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:03.098 17:47:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:03.098 17:47:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:03.098 17:47:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:31:03.098 17:47:36 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:31:03.098 17:47:36 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:03.098 17:47:36 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:31:03.098 17:47:36 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:03.098 17:47:36 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:03.358 17:47:36 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=0b8fc061-2755-47d0-bc50-ca01517cbbb7 00:31:03.358 17:47:36 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:31:03.358 17:47:36 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0b8fc061-2755-47d0-bc50-ca01517cbbb7 00:31:03.617 17:47:36 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:03.877 17:47:37 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=53a36342-1330-4801-90e9-67b5437aa65e 00:31:03.877 17:47:37 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 53a36342-1330-4801-90e9-67b5437aa65e 00:31:04.137 17:47:37 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=b9de4a63-9fed-437c-98a7-50749e047a1b 00:31:04.137 17:47:37 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:31:04.137 17:47:37 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b9de4a63-9fed-437c-98a7-50749e047a1b 00:31:04.137 17:47:37 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:31:04.137 17:47:37 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:04.137 17:47:37 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local 
base_bdev=b9de4a63-9fed-437c-98a7-50749e047a1b 00:31:04.137 17:47:37 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:31:04.137 17:47:37 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size b9de4a63-9fed-437c-98a7-50749e047a1b 00:31:04.137 17:47:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=b9de4a63-9fed-437c-98a7-50749e047a1b 00:31:04.137 17:47:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:04.137 17:47:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:04.137 17:47:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:04.137 17:47:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b9de4a63-9fed-437c-98a7-50749e047a1b 00:31:04.397 17:47:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:04.397 { 00:31:04.397 "name": "b9de4a63-9fed-437c-98a7-50749e047a1b", 00:31:04.397 "aliases": [ 00:31:04.397 "lvs/nvme0n1p0" 00:31:04.397 ], 00:31:04.397 "product_name": "Logical Volume", 00:31:04.397 "block_size": 4096, 00:31:04.397 "num_blocks": 26476544, 00:31:04.397 "uuid": "b9de4a63-9fed-437c-98a7-50749e047a1b", 00:31:04.397 "assigned_rate_limits": { 00:31:04.397 "rw_ios_per_sec": 0, 00:31:04.397 "rw_mbytes_per_sec": 0, 00:31:04.397 "r_mbytes_per_sec": 0, 00:31:04.397 "w_mbytes_per_sec": 0 00:31:04.397 }, 00:31:04.397 "claimed": false, 00:31:04.397 "zoned": false, 00:31:04.397 "supported_io_types": { 00:31:04.397 "read": true, 00:31:04.397 "write": true, 00:31:04.397 "unmap": true, 00:31:04.397 "flush": false, 00:31:04.397 "reset": true, 00:31:04.397 "nvme_admin": false, 00:31:04.397 "nvme_io": false, 00:31:04.397 "nvme_io_md": false, 00:31:04.397 "write_zeroes": true, 00:31:04.397 "zcopy": false, 00:31:04.397 "get_zone_info": false, 00:31:04.397 "zone_management": false, 00:31:04.397 "zone_append": false, 00:31:04.397 "compare": false, 00:31:04.397 "compare_and_write": false, 00:31:04.397 "abort": false, 00:31:04.397 "seek_hole": true, 00:31:04.397 "seek_data": true, 00:31:04.397 "copy": false, 00:31:04.397 "nvme_iov_md": false 00:31:04.397 }, 00:31:04.397 "driver_specific": { 00:31:04.397 "lvol": { 00:31:04.397 "lvol_store_uuid": "53a36342-1330-4801-90e9-67b5437aa65e", 00:31:04.397 "base_bdev": "nvme0n1", 00:31:04.397 "thin_provision": true, 00:31:04.397 "num_allocated_clusters": 0, 00:31:04.397 "snapshot": false, 00:31:04.397 "clone": false, 00:31:04.397 "esnap_clone": false 00:31:04.397 } 00:31:04.397 } 00:31:04.397 } 00:31:04.397 ]' 00:31:04.397 17:47:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:04.397 17:47:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:04.397 17:47:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:04.397 17:47:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:04.397 17:47:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:04.397 17:47:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:04.397 17:47:37 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:31:04.397 17:47:37 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:31:04.397 17:47:37 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
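
get_bdev_size, traced twice above, derives a bdev's size in MiB from bdev_get_bdevs JSON as block_size times num_blocks over 1 MiB: 4096 x 1310720 gives 5120 MiB for nvme0n1, and 4096 x 26476544 gives 103424 MiB for the lvol. The same computation as a small sketch over rpc.py output (the jq filters match the ones in the trace):

    # Compute a bdev's size in MiB from bdev_get_bdevs JSON, as get_bdev_size does:
    # size_mib = block_size * num_blocks / (1024 * 1024)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdev_size_mib() {
        local name=$1 bs nb
        bs=$("$rpc" bdev_get_bdevs -b "$name" | jq '.[] .block_size')
        nb=$("$rpc" bdev_get_bdevs -b "$name" | jq '.[] .num_blocks')
        echo $(( bs * nb / 1024 / 1024 ))
    }
    bdev_size_mib nvme0n1   # -> 5120, matching bs=4096 nb=1310720 in the trace
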
00:31:04.657 17:47:37 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:04.657 17:47:37 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:04.657 17:47:37 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size b9de4a63-9fed-437c-98a7-50749e047a1b 00:31:04.657 17:47:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=b9de4a63-9fed-437c-98a7-50749e047a1b 00:31:04.657 17:47:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:04.657 17:47:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:04.657 17:47:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:04.657 17:47:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b9de4a63-9fed-437c-98a7-50749e047a1b 00:31:04.917 17:47:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:04.917 { 00:31:04.917 "name": "b9de4a63-9fed-437c-98a7-50749e047a1b", 00:31:04.917 "aliases": [ 00:31:04.917 "lvs/nvme0n1p0" 00:31:04.917 ], 00:31:04.917 "product_name": "Logical Volume", 00:31:04.917 "block_size": 4096, 00:31:04.917 "num_blocks": 26476544, 00:31:04.917 "uuid": "b9de4a63-9fed-437c-98a7-50749e047a1b", 00:31:04.917 "assigned_rate_limits": { 00:31:04.917 "rw_ios_per_sec": 0, 00:31:04.917 "rw_mbytes_per_sec": 0, 00:31:04.917 "r_mbytes_per_sec": 0, 00:31:04.917 "w_mbytes_per_sec": 0 00:31:04.917 }, 00:31:04.917 "claimed": false, 00:31:04.917 "zoned": false, 00:31:04.917 "supported_io_types": { 00:31:04.917 "read": true, 00:31:04.917 "write": true, 00:31:04.917 "unmap": true, 00:31:04.917 "flush": false, 00:31:04.917 "reset": true, 00:31:04.917 "nvme_admin": false, 00:31:04.917 "nvme_io": false, 00:31:04.917 "nvme_io_md": false, 00:31:04.917 "write_zeroes": true, 00:31:04.917 "zcopy": false, 00:31:04.917 "get_zone_info": false, 00:31:04.917 "zone_management": false, 00:31:04.917 "zone_append": false, 00:31:04.917 "compare": false, 00:31:04.917 "compare_and_write": false, 00:31:04.917 "abort": false, 00:31:04.917 "seek_hole": true, 00:31:04.917 "seek_data": true, 00:31:04.917 "copy": false, 00:31:04.917 "nvme_iov_md": false 00:31:04.917 }, 00:31:04.917 "driver_specific": { 00:31:04.917 "lvol": { 00:31:04.917 "lvol_store_uuid": "53a36342-1330-4801-90e9-67b5437aa65e", 00:31:04.917 "base_bdev": "nvme0n1", 00:31:04.917 "thin_provision": true, 00:31:04.917 "num_allocated_clusters": 0, 00:31:04.917 "snapshot": false, 00:31:04.917 "clone": false, 00:31:04.917 "esnap_clone": false 00:31:04.917 } 00:31:04.917 } 00:31:04.917 } 00:31:04.917 ]' 00:31:04.917 17:47:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:04.917 17:47:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:04.917 17:47:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:04.917 17:47:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:04.917 17:47:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:04.917 17:47:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:04.917 17:47:38 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:31:04.917 17:47:38 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:05.177 17:47:38 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 
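
At this point the write-buffer cache is carved out: nvc0 is attached at 0000:00:10.0 and nvc0n1 is split so that nvc0n1p0 holds the first 5171 MiB (the cache_size computed above). The trace lines that follow assemble ftl_construct_args and issue bdev_ftl_create against the lvol with nvc0n1p0 as the cache and --fast-shutdown enabled. The whole RPC sequence this run drives, reconstructed verbatim from the surrounding trace:

    # Cache-side RPC sequence, gathered from the trace above and below.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # cache ctrl
    "$rpc" bdev_split_create nvc0n1 -s 5171 1                            # 1 x 5171 MiB part
    "$rpc" -t 240 bdev_ftl_create -b ftl0 \
        -d b9de4a63-9fed-437c-98a7-50749e047a1b \
        --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown                  # FTL on lvol + cache
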
00:31:05.177 17:47:38 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size b9de4a63-9fed-437c-98a7-50749e047a1b 00:31:05.177 17:47:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=b9de4a63-9fed-437c-98a7-50749e047a1b 00:31:05.177 17:47:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:05.177 17:47:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:05.177 17:47:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:05.177 17:47:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b9de4a63-9fed-437c-98a7-50749e047a1b 00:31:05.177 17:47:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:05.177 { 00:31:05.177 "name": "b9de4a63-9fed-437c-98a7-50749e047a1b", 00:31:05.177 "aliases": [ 00:31:05.177 "lvs/nvme0n1p0" 00:31:05.177 ], 00:31:05.177 "product_name": "Logical Volume", 00:31:05.177 "block_size": 4096, 00:31:05.177 "num_blocks": 26476544, 00:31:05.177 "uuid": "b9de4a63-9fed-437c-98a7-50749e047a1b", 00:31:05.177 "assigned_rate_limits": { 00:31:05.177 "rw_ios_per_sec": 0, 00:31:05.177 "rw_mbytes_per_sec": 0, 00:31:05.177 "r_mbytes_per_sec": 0, 00:31:05.177 "w_mbytes_per_sec": 0 00:31:05.177 }, 00:31:05.177 "claimed": false, 00:31:05.177 "zoned": false, 00:31:05.177 "supported_io_types": { 00:31:05.177 "read": true, 00:31:05.177 "write": true, 00:31:05.177 "unmap": true, 00:31:05.177 "flush": false, 00:31:05.177 "reset": true, 00:31:05.177 "nvme_admin": false, 00:31:05.177 "nvme_io": false, 00:31:05.177 "nvme_io_md": false, 00:31:05.177 "write_zeroes": true, 00:31:05.177 "zcopy": false, 00:31:05.177 "get_zone_info": false, 00:31:05.177 "zone_management": false, 00:31:05.177 "zone_append": false, 00:31:05.177 "compare": false, 00:31:05.177 "compare_and_write": false, 00:31:05.177 "abort": false, 00:31:05.177 "seek_hole": true, 00:31:05.177 "seek_data": true, 00:31:05.177 "copy": false, 00:31:05.177 "nvme_iov_md": false 00:31:05.177 }, 00:31:05.177 "driver_specific": { 00:31:05.177 "lvol": { 00:31:05.177 "lvol_store_uuid": "53a36342-1330-4801-90e9-67b5437aa65e", 00:31:05.177 "base_bdev": "nvme0n1", 00:31:05.177 "thin_provision": true, 00:31:05.177 "num_allocated_clusters": 0, 00:31:05.177 "snapshot": false, 00:31:05.177 "clone": false, 00:31:05.177 "esnap_clone": false 00:31:05.177 } 00:31:05.177 } 00:31:05.177 } 00:31:05.177 ]' 00:31:05.177 17:47:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:05.177 17:47:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:05.177 17:47:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:05.444 17:47:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:05.444 17:47:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:05.444 17:47:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:05.444 17:47:38 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:31:05.444 17:47:38 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b9de4a63-9fed-437c-98a7-50749e047a1b --l2p_dram_limit 10' 00:31:05.444 17:47:38 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:31:05.444 17:47:38 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:31:05.444 17:47:38 ftl.ftl_restore_fast 
-- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:31:05.444 17:47:38 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:31:05.444 17:47:38 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:31:05.445 17:47:38 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b9de4a63-9fed-437c-98a7-50749e047a1b --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:31:05.445 [2024-12-07 17:47:38.749052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.445 [2024-12-07 17:47:38.749095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:05.445 [2024-12-07 17:47:38.749108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:05.445 [2024-12-07 17:47:38.749115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.445 [2024-12-07 17:47:38.749156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.445 [2024-12-07 17:47:38.749164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:05.445 [2024-12-07 17:47:38.749172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:31:05.445 [2024-12-07 17:47:38.749178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.445 [2024-12-07 17:47:38.749197] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:05.445 [2024-12-07 17:47:38.749771] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:05.445 [2024-12-07 17:47:38.749793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.445 [2024-12-07 17:47:38.749799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:05.445 [2024-12-07 17:47:38.749809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:31:05.445 [2024-12-07 17:47:38.749815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.445 [2024-12-07 17:47:38.749837] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 58d6a6c1-aac4-4bca-8413-1635a6a457e5 00:31:05.445 [2024-12-07 17:47:38.751090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.445 [2024-12-07 17:47:38.751121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:05.445 [2024-12-07 17:47:38.751130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:31:05.445 [2024-12-07 17:47:38.751141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.445 [2024-12-07 17:47:38.757994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.445 [2024-12-07 17:47:38.758021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:05.445 [2024-12-07 17:47:38.758029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.816 ms 00:31:05.445 [2024-12-07 17:47:38.758037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.445 [2024-12-07 17:47:38.758137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.445 [2024-12-07 17:47:38.758146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:05.445 [2024-12-07 17:47:38.758153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 
00:31:05.445 [2024-12-07 17:47:38.758164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.445 [2024-12-07 17:47:38.758197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.445 [2024-12-07 17:47:38.758206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:05.445 [2024-12-07 17:47:38.758214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:05.445 [2024-12-07 17:47:38.758221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.445 [2024-12-07 17:47:38.758236] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:05.445 [2024-12-07 17:47:38.761469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.445 [2024-12-07 17:47:38.761493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:05.445 [2024-12-07 17:47:38.761503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.234 ms 00:31:05.445 [2024-12-07 17:47:38.761509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.445 [2024-12-07 17:47:38.761540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.445 [2024-12-07 17:47:38.761547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:05.445 [2024-12-07 17:47:38.761554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:05.445 [2024-12-07 17:47:38.761560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.445 [2024-12-07 17:47:38.761575] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:05.445 [2024-12-07 17:47:38.761686] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:05.445 [2024-12-07 17:47:38.761699] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:05.445 [2024-12-07 17:47:38.761708] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:05.445 [2024-12-07 17:47:38.761718] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:05.445 [2024-12-07 17:47:38.761724] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:05.445 [2024-12-07 17:47:38.761732] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:05.445 [2024-12-07 17:47:38.761737] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:05.445 [2024-12-07 17:47:38.761747] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:05.445 [2024-12-07 17:47:38.761754] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:05.445 [2024-12-07 17:47:38.761761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.445 [2024-12-07 17:47:38.761772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:05.445 [2024-12-07 17:47:38.761779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:31:05.445 [2024-12-07 17:47:38.761785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.445 [2024-12-07 17:47:38.761852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.445 [2024-12-07 
17:47:38.761858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:05.445 [2024-12-07 17:47:38.761866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:31:05.445 [2024-12-07 17:47:38.761873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.445 [2024-12-07 17:47:38.761952] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:05.445 [2024-12-07 17:47:38.761960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:05.445 [2024-12-07 17:47:38.761968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:05.445 [2024-12-07 17:47:38.761974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:05.445 [2024-12-07 17:47:38.761995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:05.445 [2024-12-07 17:47:38.762001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:05.445 [2024-12-07 17:47:38.762008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:05.445 [2024-12-07 17:47:38.762014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:05.445 [2024-12-07 17:47:38.762022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:05.445 [2024-12-07 17:47:38.762028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:05.445 [2024-12-07 17:47:38.762035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:05.445 [2024-12-07 17:47:38.762041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:05.445 [2024-12-07 17:47:38.762048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:05.445 [2024-12-07 17:47:38.762055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:05.445 [2024-12-07 17:47:38.762063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:05.445 [2024-12-07 17:47:38.762068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:05.445 [2024-12-07 17:47:38.762077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:05.445 [2024-12-07 17:47:38.762082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:05.445 [2024-12-07 17:47:38.762090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:05.445 [2024-12-07 17:47:38.762095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:05.445 [2024-12-07 17:47:38.762102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:05.445 [2024-12-07 17:47:38.762107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:05.445 [2024-12-07 17:47:38.762113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:05.445 [2024-12-07 17:47:38.762119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:05.445 [2024-12-07 17:47:38.762127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:05.445 [2024-12-07 17:47:38.762133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:05.445 [2024-12-07 17:47:38.762139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:05.445 [2024-12-07 17:47:38.762144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:05.445 [2024-12-07 17:47:38.762151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 
00:31:05.445 [2024-12-07 17:47:38.762156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:05.445 [2024-12-07 17:47:38.762162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:05.445 [2024-12-07 17:47:38.762167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:05.445 [2024-12-07 17:47:38.762177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:05.445 [2024-12-07 17:47:38.762182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:05.445 [2024-12-07 17:47:38.762189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:05.445 [2024-12-07 17:47:38.762194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:05.445 [2024-12-07 17:47:38.762200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:05.445 [2024-12-07 17:47:38.762205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:05.445 [2024-12-07 17:47:38.762211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:05.445 [2024-12-07 17:47:38.762216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:05.445 [2024-12-07 17:47:38.762223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:05.445 [2024-12-07 17:47:38.762228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:05.445 [2024-12-07 17:47:38.762235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:05.445 [2024-12-07 17:47:38.762239] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:05.445 [2024-12-07 17:47:38.762247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:05.445 [2024-12-07 17:47:38.762255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:05.445 [2024-12-07 17:47:38.762262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:05.445 [2024-12-07 17:47:38.762269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:05.445 [2024-12-07 17:47:38.762278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:05.445 [2024-12-07 17:47:38.762284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:05.445 [2024-12-07 17:47:38.762291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:05.445 [2024-12-07 17:47:38.762303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:05.445 [2024-12-07 17:47:38.762310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:05.445 [2024-12-07 17:47:38.762316] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:05.445 [2024-12-07 17:47:38.762327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:05.445 [2024-12-07 17:47:38.762333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:05.445 [2024-12-07 17:47:38.762341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:05.445 [2024-12-07 17:47:38.762346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 
00:31:05.445 [2024-12-07 17:47:38.762353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:05.445 [2024-12-07 17:47:38.762359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:05.445 [2024-12-07 17:47:38.762367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:05.445 [2024-12-07 17:47:38.762372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:05.445 [2024-12-07 17:47:38.762378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:05.445 [2024-12-07 17:47:38.762384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:05.445 [2024-12-07 17:47:38.762392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:05.445 [2024-12-07 17:47:38.762398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:05.445 [2024-12-07 17:47:38.762406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:05.445 [2024-12-07 17:47:38.762412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:05.445 [2024-12-07 17:47:38.762420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:05.445 [2024-12-07 17:47:38.762425] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:05.445 [2024-12-07 17:47:38.762433] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:05.445 [2024-12-07 17:47:38.762439] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:05.445 [2024-12-07 17:47:38.762446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:05.445 [2024-12-07 17:47:38.762453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:05.445 [2024-12-07 17:47:38.762459] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:05.445 [2024-12-07 17:47:38.762465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.445 [2024-12-07 17:47:38.762472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:05.445 [2024-12-07 17:47:38.762479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:31:05.445 [2024-12-07 17:47:38.762486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.445 [2024-12-07 17:47:38.762527] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs 
scrubbing, this may take a while. 00:31:05.445 [2024-12-07 17:47:38.762540] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:09.708 [2024-12-07 17:47:42.834925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.708 [2024-12-07 17:47:42.835066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:09.708 [2024-12-07 17:47:42.835089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4072.378 ms 00:31:09.708 [2024-12-07 17:47:42.835103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.708 [2024-12-07 17:47:42.872890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.708 [2024-12-07 17:47:42.872968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:09.708 [2024-12-07 17:47:42.873004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.514 ms 00:31:09.708 [2024-12-07 17:47:42.873018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.708 [2024-12-07 17:47:42.873189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.708 [2024-12-07 17:47:42.873207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:09.708 [2024-12-07 17:47:42.873218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:31:09.708 [2024-12-07 17:47:42.873237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.708 [2024-12-07 17:47:42.913642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.708 [2024-12-07 17:47:42.913700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:09.708 [2024-12-07 17:47:42.913714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.367 ms 00:31:09.708 [2024-12-07 17:47:42.913726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.708 [2024-12-07 17:47:42.913770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.708 [2024-12-07 17:47:42.913788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:09.708 [2024-12-07 17:47:42.913799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:09.708 [2024-12-07 17:47:42.913820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.708 [2024-12-07 17:47:42.914606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.708 [2024-12-07 17:47:42.914660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:09.708 [2024-12-07 17:47:42.914674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:31:09.708 [2024-12-07 17:47:42.914685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.708 [2024-12-07 17:47:42.914812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.708 [2024-12-07 17:47:42.914827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:09.708 [2024-12-07 17:47:42.914840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:31:09.708 [2024-12-07 17:47:42.914855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.708 [2024-12-07 17:47:42.935602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.708 [2024-12-07 17:47:42.935657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 
00:31:09.708 [2024-12-07 17:47:42.935669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.723 ms 00:31:09.708 [2024-12-07 17:47:42.935680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.708 [2024-12-07 17:47:42.962357] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:09.708 [2024-12-07 17:47:42.967512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.708 [2024-12-07 17:47:42.967563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:09.708 [2024-12-07 17:47:42.967580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.735 ms 00:31:09.708 [2024-12-07 17:47:42.967590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.708 [2024-12-07 17:47:43.076969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.708 [2024-12-07 17:47:43.077033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:09.708 [2024-12-07 17:47:43.077053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.325 ms 00:31:09.708 [2024-12-07 17:47:43.077063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.708 [2024-12-07 17:47:43.077290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.708 [2024-12-07 17:47:43.077308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:09.708 [2024-12-07 17:47:43.077324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:31:09.708 [2024-12-07 17:47:43.077333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.970 [2024-12-07 17:47:43.103803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.970 [2024-12-07 17:47:43.104147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:31:09.970 [2024-12-07 17:47:43.104178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.381 ms 00:31:09.970 [2024-12-07 17:47:43.104189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.970 [2024-12-07 17:47:43.130015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.970 [2024-12-07 17:47:43.130063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:09.970 [2024-12-07 17:47:43.130081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.713 ms 00:31:09.970 [2024-12-07 17:47:43.130089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.970 [2024-12-07 17:47:43.130737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.970 [2024-12-07 17:47:43.130760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:09.970 [2024-12-07 17:47:43.130773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:31:09.970 [2024-12-07 17:47:43.130785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.970 [2024-12-07 17:47:43.223247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.970 [2024-12-07 17:47:43.223471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:09.970 [2024-12-07 17:47:43.223506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.412 ms 00:31:09.970 [2024-12-07 17:47:43.223516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.970 
[2024-12-07 17:47:43.252490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.970 [2024-12-07 17:47:43.252541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:09.970 [2024-12-07 17:47:43.252558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.794 ms 00:31:09.970 [2024-12-07 17:47:43.252568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.970 [2024-12-07 17:47:43.278737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.970 [2024-12-07 17:47:43.278951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:09.970 [2024-12-07 17:47:43.279000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.113 ms 00:31:09.970 [2024-12-07 17:47:43.279011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.970 [2024-12-07 17:47:43.305900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.970 [2024-12-07 17:47:43.306121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:09.970 [2024-12-07 17:47:43.306152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.837 ms 00:31:09.970 [2024-12-07 17:47:43.306162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.970 [2024-12-07 17:47:43.306215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.970 [2024-12-07 17:47:43.306227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:09.970 [2024-12-07 17:47:43.306244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:09.970 [2024-12-07 17:47:43.306253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.970 [2024-12-07 17:47:43.306371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.970 [2024-12-07 17:47:43.306387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:09.970 [2024-12-07 17:47:43.306399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:31:09.970 [2024-12-07 17:47:43.306408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.970 [2024-12-07 17:47:43.307837] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4558.171 ms, result 0 00:31:09.970 { 00:31:09.970 "name": "ftl0", 00:31:09.970 "uuid": "58d6a6c1-aac4-4bca-8413-1635a6a457e5" 00:31:09.970 } 00:31:09.970 17:47:43 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:31:09.970 17:47:43 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:31:10.231 17:47:43 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:31:10.231 17:47:43 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:31:10.496 [2024-12-07 17:47:43.750716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.496 [2024-12-07 17:47:43.750756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:10.496 [2024-12-07 17:47:43.750765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:10.496 [2024-12-07 17:47:43.750774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.496 [2024-12-07 17:47:43.750792] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:31:10.496 [2024-12-07 17:47:43.753079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.496 [2024-12-07 17:47:43.753195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:10.496 [2024-12-07 17:47:43.753212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.271 ms 00:31:10.496 [2024-12-07 17:47:43.753219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.496 [2024-12-07 17:47:43.753426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.496 [2024-12-07 17:47:43.753438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:10.496 [2024-12-07 17:47:43.753446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:31:10.496 [2024-12-07 17:47:43.753452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.496 [2024-12-07 17:47:43.755885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.496 [2024-12-07 17:47:43.755902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:10.496 [2024-12-07 17:47:43.755912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.419 ms 00:31:10.496 [2024-12-07 17:47:43.755919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.496 [2024-12-07 17:47:43.760571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.496 [2024-12-07 17:47:43.760593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:10.496 [2024-12-07 17:47:43.760605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.637 ms 00:31:10.496 [2024-12-07 17:47:43.760611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.496 [2024-12-07 17:47:43.778784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.496 [2024-12-07 17:47:43.778810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:10.496 [2024-12-07 17:47:43.778820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.120 ms 00:31:10.496 [2024-12-07 17:47:43.778826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.496 [2024-12-07 17:47:43.792107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.496 [2024-12-07 17:47:43.792132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:10.496 [2024-12-07 17:47:43.792144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.247 ms 00:31:10.496 [2024-12-07 17:47:43.792150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.496 [2024-12-07 17:47:43.792265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.496 [2024-12-07 17:47:43.792274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:10.496 [2024-12-07 17:47:43.792283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:31:10.496 [2024-12-07 17:47:43.792289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.496 [2024-12-07 17:47:43.810694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.496 [2024-12-07 17:47:43.810808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:10.496 [2024-12-07 17:47:43.810825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.387 ms 00:31:10.496 
[2024-12-07 17:47:43.810831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.496 [2024-12-07 17:47:43.828641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.496 [2024-12-07 17:47:43.828666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:10.496 [2024-12-07 17:47:43.828675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.782 ms 00:31:10.496 [2024-12-07 17:47:43.828681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.496 [2024-12-07 17:47:43.846476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.496 [2024-12-07 17:47:43.846500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:10.496 [2024-12-07 17:47:43.846509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.762 ms 00:31:10.496 [2024-12-07 17:47:43.846514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.496 [2024-12-07 17:47:43.864214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.496 [2024-12-07 17:47:43.864320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:10.496 [2024-12-07 17:47:43.864336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.641 ms 00:31:10.496 [2024-12-07 17:47:43.864342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.496 [2024-12-07 17:47:43.864368] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:10.496 [2024-12-07 17:47:43.864380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:10.496 [2024-12-07 17:47:43.864392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:10.496 [2024-12-07 17:47:43.864398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:10.496 [2024-12-07 17:47:43.864406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:10.496 [2024-12-07 17:47:43.864412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:10.496 [2024-12-07 17:47:43.864420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:10.496 [2024-12-07 17:47:43.864425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:10.496 [2024-12-07 17:47:43.864435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:10.496 [2024-12-07 17:47:43.864441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:10.496 [2024-12-07 17:47:43.864448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:10.496 [2024-12-07 17:47:43.864454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:10.496 [2024-12-07 17:47:43.864462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:10.496 [2024-12-07 17:47:43.864468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:10.496 [2024-12-07 17:47:43.864476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: 
free 00:31:10.496 [2024-12-07 17:47:43.864482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:10.496 [2024-12-07 17:47:43.864489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:10.496 [2024-12-07 17:47:43.864495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:10.496 [2024-12-07 17:47:43.864502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:10.496 [2024-12-07 17:47:43.864508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:10.496 [2024-12-07 17:47:43.864517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:10.496 [2024-12-07 17:47:43.864522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:10.496 [2024-12-07 17:47:43.864530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:10.496 [2024-12-07 17:47:43.864535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:10.496 [2024-12-07 17:47:43.864544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:10.496 [2024-12-07 17:47:43.864549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 
261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.864977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.865000] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.865008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.865014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.865021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.865031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.865040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.865046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.865053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.865059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.865068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.865074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.865082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:10.497 [2024-12-07 17:47:43.865094] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:10.497 [2024-12-07 17:47:43.865103] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 58d6a6c1-aac4-4bca-8413-1635a6a457e5 00:31:10.497 [2024-12-07 17:47:43.865109] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:10.497 [2024-12-07 17:47:43.865118] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:10.497 [2024-12-07 17:47:43.865126] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:10.497 [2024-12-07 17:47:43.865134] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:10.497 [2024-12-07 17:47:43.865139] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:10.497 [2024-12-07 17:47:43.865149] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:10.497 [2024-12-07 17:47:43.865155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:10.497 [2024-12-07 17:47:43.865162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:10.497 [2024-12-07 17:47:43.865167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:10.497 [2024-12-07 17:47:43.865173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.497 [2024-12-07 17:47:43.865179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:10.497 [2024-12-07 17:47:43.865187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:31:10.497 [2024-12-07 17:47:43.865195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.760 [2024-12-07 17:47:43.874936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.760 [2024-12-07 17:47:43.874960] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:10.760 [2024-12-07 17:47:43.874971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.714 ms 00:31:10.760 [2024-12-07 17:47:43.874977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.760 [2024-12-07 17:47:43.875284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.760 [2024-12-07 17:47:43.875296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:10.760 [2024-12-07 17:47:43.875307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:31:10.760 [2024-12-07 17:47:43.875312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.760 [2024-12-07 17:47:43.910199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.760 [2024-12-07 17:47:43.910333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:10.760 [2024-12-07 17:47:43.910349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.760 [2024-12-07 17:47:43.910356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.760 [2024-12-07 17:47:43.910409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.760 [2024-12-07 17:47:43.910416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:10.760 [2024-12-07 17:47:43.910426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.760 [2024-12-07 17:47:43.910432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.760 [2024-12-07 17:47:43.910492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.760 [2024-12-07 17:47:43.910501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:10.760 [2024-12-07 17:47:43.910510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.760 [2024-12-07 17:47:43.910515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.760 [2024-12-07 17:47:43.910533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.760 [2024-12-07 17:47:43.910539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:10.760 [2024-12-07 17:47:43.910547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.760 [2024-12-07 17:47:43.910554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.760 [2024-12-07 17:47:43.972601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.760 [2024-12-07 17:47:43.972637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:10.760 [2024-12-07 17:47:43.972648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.760 [2024-12-07 17:47:43.972654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.760 [2024-12-07 17:47:44.023538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.760 [2024-12-07 17:47:44.023576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:10.760 [2024-12-07 17:47:44.023587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.760 [2024-12-07 17:47:44.023596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.760 [2024-12-07 17:47:44.023683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:31:10.760 [2024-12-07 17:47:44.023691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:10.760 [2024-12-07 17:47:44.023700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.760 [2024-12-07 17:47:44.023706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.760 [2024-12-07 17:47:44.023748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.760 [2024-12-07 17:47:44.023756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:10.760 [2024-12-07 17:47:44.023764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.760 [2024-12-07 17:47:44.023770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.760 [2024-12-07 17:47:44.023852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.760 [2024-12-07 17:47:44.023861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:10.760 [2024-12-07 17:47:44.023870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.760 [2024-12-07 17:47:44.023876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.760 [2024-12-07 17:47:44.023904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.760 [2024-12-07 17:47:44.023911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:10.760 [2024-12-07 17:47:44.023919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.760 [2024-12-07 17:47:44.023926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.760 [2024-12-07 17:47:44.023965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.760 [2024-12-07 17:47:44.023973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:10.760 [2024-12-07 17:47:44.023995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.760 [2024-12-07 17:47:44.024002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.760 [2024-12-07 17:47:44.024047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.760 [2024-12-07 17:47:44.024055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:10.760 [2024-12-07 17:47:44.024064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.760 [2024-12-07 17:47:44.024070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.760 [2024-12-07 17:47:44.024208] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 273.437 ms, result 0 00:31:10.760 true 00:31:10.760 17:47:44 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 83991 00:31:10.760 17:47:44 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 83991 ']' 00:31:10.760 17:47:44 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 83991 00:31:10.760 17:47:44 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:31:10.760 17:47:44 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:10.760 17:47:44 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83991 00:31:10.760 killing process with pid 83991 00:31:10.760 17:47:44 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:10.760 
17:47:44 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:10.760 17:47:44 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83991' 00:31:10.760 17:47:44 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 83991 00:31:10.760 17:47:44 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 83991 00:31:18.898 17:47:51 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:31:23.095 262144+0 records in 00:31:23.095 262144+0 records out 00:31:23.095 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.7848 s, 284 MB/s 00:31:23.095 17:47:55 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:24.031 17:47:57 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:24.292 [2024-12-07 17:47:57.442474] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:31:24.292 [2024-12-07 17:47:57.444005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84221 ] 00:31:24.292 [2024-12-07 17:47:57.599096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.553 [2024-12-07 17:47:57.695496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.815 [2024-12-07 17:47:57.959124] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:24.815 [2024-12-07 17:47:57.959406] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:24.815 [2024-12-07 17:47:58.120142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.815 [2024-12-07 17:47:58.120303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:24.815 [2024-12-07 17:47:58.120373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:24.815 [2024-12-07 17:47:58.120397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.815 [2024-12-07 17:47:58.120464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.815 [2024-12-07 17:47:58.120492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:24.815 [2024-12-07 17:47:58.120512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:31:24.815 [2024-12-07 17:47:58.120530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.815 [2024-12-07 17:47:58.120561] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:24.815 [2024-12-07 17:47:58.121353] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:24.815 [2024-12-07 17:47:58.121472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.815 [2024-12-07 17:47:58.121525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:24.815 [2024-12-07 17:47:58.121549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms 00:31:24.815 [2024-12-07 17:47:58.121588] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.815 [2024-12-07 17:47:58.122660] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:24.815 [2024-12-07 17:47:58.135527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.815 [2024-12-07 17:47:58.135644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:24.815 [2024-12-07 17:47:58.135697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.868 ms 00:31:24.815 [2024-12-07 17:47:58.135718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.815 [2024-12-07 17:47:58.135782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.815 [2024-12-07 17:47:58.135806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:24.815 [2024-12-07 17:47:58.135825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:31:24.815 [2024-12-07 17:47:58.135843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.815 [2024-12-07 17:47:58.140777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.815 [2024-12-07 17:47:58.140879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:24.815 [2024-12-07 17:47:58.140928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.869 ms 00:31:24.815 [2024-12-07 17:47:58.140954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.815 [2024-12-07 17:47:58.141041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.815 [2024-12-07 17:47:58.141064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:24.815 [2024-12-07 17:47:58.141083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:31:24.815 [2024-12-07 17:47:58.141138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.815 [2024-12-07 17:47:58.141204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.815 [2024-12-07 17:47:58.141644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:24.815 [2024-12-07 17:47:58.141668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:24.815 [2024-12-07 17:47:58.141678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.815 [2024-12-07 17:47:58.141724] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:24.815 [2024-12-07 17:47:58.144930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.815 [2024-12-07 17:47:58.144958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:24.815 [2024-12-07 17:47:58.144970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.212 ms 00:31:24.815 [2024-12-07 17:47:58.144977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.815 [2024-12-07 17:47:58.145020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.815 [2024-12-07 17:47:58.145029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:24.815 [2024-12-07 17:47:58.145037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:24.815 [2024-12-07 17:47:58.145044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.815 [2024-12-07 17:47:58.145078] ftl_layout.c: 
613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:24.815 [2024-12-07 17:47:58.145099] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:24.815 [2024-12-07 17:47:58.145133] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:24.815 [2024-12-07 17:47:58.145149] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:24.815 [2024-12-07 17:47:58.145251] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:24.815 [2024-12-07 17:47:58.145261] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:24.815 [2024-12-07 17:47:58.145271] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:24.815 [2024-12-07 17:47:58.145280] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:24.815 [2024-12-07 17:47:58.145289] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:24.815 [2024-12-07 17:47:58.145296] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:24.815 [2024-12-07 17:47:58.145303] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:24.815 [2024-12-07 17:47:58.145312] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:24.815 [2024-12-07 17:47:58.145319] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:24.815 [2024-12-07 17:47:58.145326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.815 [2024-12-07 17:47:58.145334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:24.815 [2024-12-07 17:47:58.145362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:31:24.815 [2024-12-07 17:47:58.145369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.815 [2024-12-07 17:47:58.145452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.815 [2024-12-07 17:47:58.145460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:24.815 [2024-12-07 17:47:58.145468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:24.816 [2024-12-07 17:47:58.145474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.816 [2024-12-07 17:47:58.145578] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:24.816 [2024-12-07 17:47:58.145588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:24.816 [2024-12-07 17:47:58.145596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:24.816 [2024-12-07 17:47:58.145603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:24.816 [2024-12-07 17:47:58.145611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:24.816 [2024-12-07 17:47:58.145617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:24.816 [2024-12-07 17:47:58.145624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:24.816 [2024-12-07 17:47:58.145632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 
00:31:24.816 [2024-12-07 17:47:58.145638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:24.816 [2024-12-07 17:47:58.145645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:24.816 [2024-12-07 17:47:58.145652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:24.816 [2024-12-07 17:47:58.145658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:24.816 [2024-12-07 17:47:58.145665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:24.816 [2024-12-07 17:47:58.145676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:24.816 [2024-12-07 17:47:58.145686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:24.816 [2024-12-07 17:47:58.145692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:24.816 [2024-12-07 17:47:58.145699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:24.816 [2024-12-07 17:47:58.145706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:24.816 [2024-12-07 17:47:58.145713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:24.816 [2024-12-07 17:47:58.145719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:24.816 [2024-12-07 17:47:58.145726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:24.816 [2024-12-07 17:47:58.145732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:24.816 [2024-12-07 17:47:58.145738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:24.816 [2024-12-07 17:47:58.145745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:24.816 [2024-12-07 17:47:58.145751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:24.816 [2024-12-07 17:47:58.145757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:24.816 [2024-12-07 17:47:58.145764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:24.816 [2024-12-07 17:47:58.145770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:24.816 [2024-12-07 17:47:58.145776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:24.816 [2024-12-07 17:47:58.145783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:24.816 [2024-12-07 17:47:58.145789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:24.816 [2024-12-07 17:47:58.145796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:24.816 [2024-12-07 17:47:58.145802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:24.816 [2024-12-07 17:47:58.145809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:24.816 [2024-12-07 17:47:58.145815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:24.816 [2024-12-07 17:47:58.145821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:24.816 [2024-12-07 17:47:58.145827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:24.816 [2024-12-07 17:47:58.145834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:24.816 [2024-12-07 17:47:58.145840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:24.816 [2024-12-07 17:47:58.145846] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:24.816 [2024-12-07 17:47:58.145853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:24.816 [2024-12-07 17:47:58.145859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:24.816 [2024-12-07 17:47:58.145866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:24.816 [2024-12-07 17:47:58.145872] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:24.816 [2024-12-07 17:47:58.145879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:24.816 [2024-12-07 17:47:58.145886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:24.816 [2024-12-07 17:47:58.145894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:24.816 [2024-12-07 17:47:58.145902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:24.816 [2024-12-07 17:47:58.145909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:24.816 [2024-12-07 17:47:58.145916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:24.816 [2024-12-07 17:47:58.145922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:24.816 [2024-12-07 17:47:58.145928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:24.816 [2024-12-07 17:47:58.145935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:24.816 [2024-12-07 17:47:58.145944] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:24.816 [2024-12-07 17:47:58.145952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:24.816 [2024-12-07 17:47:58.145963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:24.816 [2024-12-07 17:47:58.145970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:24.816 [2024-12-07 17:47:58.145976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:24.816 [2024-12-07 17:47:58.145994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:24.816 [2024-12-07 17:47:58.146002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:24.816 [2024-12-07 17:47:58.146008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:24.816 [2024-12-07 17:47:58.146015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:24.816 [2024-12-07 17:47:58.146022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:24.816 [2024-12-07 17:47:58.146029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:24.816 [2024-12-07 17:47:58.146036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 
blk_offs:0x71a0 blk_sz:0x20 00:31:24.816 [2024-12-07 17:47:58.146043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:24.816 [2024-12-07 17:47:58.146049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:24.816 [2024-12-07 17:47:58.146056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:24.816 [2024-12-07 17:47:58.146063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:24.816 [2024-12-07 17:47:58.146085] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:24.816 [2024-12-07 17:47:58.146093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:24.816 [2024-12-07 17:47:58.146102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:24.816 [2024-12-07 17:47:58.146108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:24.816 [2024-12-07 17:47:58.146120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:24.816 [2024-12-07 17:47:58.146127] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:24.816 [2024-12-07 17:47:58.146134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.816 [2024-12-07 17:47:58.146142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:24.816 [2024-12-07 17:47:58.146149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:31:24.816 [2024-12-07 17:47:58.146156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.816 [2024-12-07 17:47:58.171999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.816 [2024-12-07 17:47:58.172030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:24.816 [2024-12-07 17:47:58.172040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.800 ms 00:31:24.816 [2024-12-07 17:47:58.172050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:24.816 [2024-12-07 17:47:58.172132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:24.816 [2024-12-07 17:47:58.172141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:24.816 [2024-12-07 17:47:58.172149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:31:24.816 [2024-12-07 17:47:58.172156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.078 [2024-12-07 17:47:58.210859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.079 [2024-12-07 17:47:58.210897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:25.079 [2024-12-07 17:47:58.210908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.648 ms 00:31:25.079 [2024-12-07 17:47:58.210916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:31:25.079 [2024-12-07 17:47:58.210955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.079 [2024-12-07 17:47:58.210965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:25.079 [2024-12-07 17:47:58.210976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:25.079 [2024-12-07 17:47:58.210997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.079 [2024-12-07 17:47:58.211347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.079 [2024-12-07 17:47:58.211366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:25.079 [2024-12-07 17:47:58.211375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:31:25.079 [2024-12-07 17:47:58.211383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.079 [2024-12-07 17:47:58.211504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.079 [2024-12-07 17:47:58.211520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:25.079 [2024-12-07 17:47:58.211528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:31:25.079 [2024-12-07 17:47:58.211540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.079 [2024-12-07 17:47:58.224678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.079 [2024-12-07 17:47:58.224709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:25.079 [2024-12-07 17:47:58.224721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.119 ms 00:31:25.079 [2024-12-07 17:47:58.224728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.079 [2024-12-07 17:47:58.237667] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:25.079 [2024-12-07 17:47:58.237798] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:25.079 [2024-12-07 17:47:58.237813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.079 [2024-12-07 17:47:58.237822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:25.079 [2024-12-07 17:47:58.237830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.982 ms 00:31:25.079 [2024-12-07 17:47:58.237837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.079 [2024-12-07 17:47:58.273654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.079 [2024-12-07 17:47:58.273716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:25.079 [2024-12-07 17:47:58.273729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.782 ms 00:31:25.079 [2024-12-07 17:47:58.273737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.079 [2024-12-07 17:47:58.285875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.079 [2024-12-07 17:47:58.285913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:25.079 [2024-12-07 17:47:58.285924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.082 ms 00:31:25.079 [2024-12-07 17:47:58.285931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.079 [2024-12-07 17:47:58.297962] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.079 [2024-12-07 17:47:58.298010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:25.079 [2024-12-07 17:47:58.298021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.992 ms 00:31:25.079 [2024-12-07 17:47:58.298028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.079 [2024-12-07 17:47:58.298641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.079 [2024-12-07 17:47:58.298669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:25.079 [2024-12-07 17:47:58.298679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:31:25.079 [2024-12-07 17:47:58.298689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.079 [2024-12-07 17:47:58.358478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.079 [2024-12-07 17:47:58.358520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:25.079 [2024-12-07 17:47:58.358533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.772 ms 00:31:25.079 [2024-12-07 17:47:58.358545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.079 [2024-12-07 17:47:58.369217] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:25.079 [2024-12-07 17:47:58.371714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.079 [2024-12-07 17:47:58.371746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:25.079 [2024-12-07 17:47:58.371757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.130 ms 00:31:25.079 [2024-12-07 17:47:58.371764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.079 [2024-12-07 17:47:58.371845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.079 [2024-12-07 17:47:58.371855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:25.079 [2024-12-07 17:47:58.371864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:25.079 [2024-12-07 17:47:58.371872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.079 [2024-12-07 17:47:58.371937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.079 [2024-12-07 17:47:58.371948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:25.079 [2024-12-07 17:47:58.371956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:31:25.079 [2024-12-07 17:47:58.371963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.079 [2024-12-07 17:47:58.372000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.079 [2024-12-07 17:47:58.372009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:25.079 [2024-12-07 17:47:58.372017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:31:25.079 [2024-12-07 17:47:58.372025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.079 [2024-12-07 17:47:58.372054] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:25.079 [2024-12-07 17:47:58.372066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.079 [2024-12-07 17:47:58.372073] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:25.079 [2024-12-07 17:47:58.372081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:25.079 [2024-12-07 17:47:58.372089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.079 [2024-12-07 17:47:58.395944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.079 [2024-12-07 17:47:58.396089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:25.079 [2024-12-07 17:47:58.396108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.839 ms 00:31:25.079 [2024-12-07 17:47:58.396122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.079 [2024-12-07 17:47:58.396190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:25.079 [2024-12-07 17:47:58.396199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:25.079 [2024-12-07 17:47:58.396208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:31:25.079 [2024-12-07 17:47:58.396215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:25.079 [2024-12-07 17:47:58.397131] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 276.566 ms, result 0 00:31:26.462  [2024-12-07T17:48:00.416Z] Copying: 16/1024 [MB] (16 MBps) [2024-12-07T17:48:01.795Z] Copying: 33/1024 [MB] (16 MBps) [2024-12-07T17:48:02.735Z] Copying: 59/1024 [MB] (26 MBps) [2024-12-07T17:48:03.677Z] Copying: 77/1024 [MB] (17 MBps) [2024-12-07T17:48:04.615Z] Copying: 95/1024 [MB] (18 MBps) [2024-12-07T17:48:05.557Z] Copying: 112/1024 [MB] (16 MBps) [2024-12-07T17:48:06.497Z] Copying: 136/1024 [MB] (23 MBps) [2024-12-07T17:48:07.438Z] Copying: 162/1024 [MB] (26 MBps) [2024-12-07T17:48:08.824Z] Copying: 184/1024 [MB] (21 MBps) [2024-12-07T17:48:09.763Z] Copying: 206/1024 [MB] (21 MBps) [2024-12-07T17:48:10.702Z] Copying: 228/1024 [MB] (22 MBps) [2024-12-07T17:48:11.646Z] Copying: 253/1024 [MB] (24 MBps) [2024-12-07T17:48:12.585Z] Copying: 277/1024 [MB] (24 MBps) [2024-12-07T17:48:13.524Z] Copying: 294/1024 [MB] (16 MBps) [2024-12-07T17:48:14.483Z] Copying: 307/1024 [MB] (13 MBps) [2024-12-07T17:48:15.490Z] Copying: 318/1024 [MB] (11 MBps) [2024-12-07T17:48:16.435Z] Copying: 335/1024 [MB] (17 MBps) [2024-12-07T17:48:17.821Z] Copying: 352/1024 [MB] (17 MBps) [2024-12-07T17:48:18.766Z] Copying: 375/1024 [MB] (22 MBps) [2024-12-07T17:48:19.711Z] Copying: 390/1024 [MB] (14 MBps) [2024-12-07T17:48:20.657Z] Copying: 410088/1048576 [kB] (10144 kBps) [2024-12-07T17:48:21.601Z] Copying: 413/1024 [MB] (13 MBps) [2024-12-07T17:48:22.545Z] Copying: 430/1024 [MB] (17 MBps) [2024-12-07T17:48:23.489Z] Copying: 443/1024 [MB] (13 MBps) [2024-12-07T17:48:24.434Z] Copying: 456/1024 [MB] (12 MBps) [2024-12-07T17:48:25.821Z] Copying: 471/1024 [MB] (14 MBps) [2024-12-07T17:48:26.765Z] Copying: 483/1024 [MB] (12 MBps) [2024-12-07T17:48:27.708Z] Copying: 499/1024 [MB] (16 MBps) [2024-12-07T17:48:28.654Z] Copying: 516/1024 [MB] (16 MBps) [2024-12-07T17:48:29.599Z] Copying: 532/1024 [MB] (16 MBps) [2024-12-07T17:48:30.543Z] Copying: 548/1024 [MB] (15 MBps) [2024-12-07T17:48:31.489Z] Copying: 564/1024 [MB] (15 MBps) [2024-12-07T17:48:32.432Z] Copying: 579/1024 [MB] (15 MBps) [2024-12-07T17:48:33.823Z] Copying: 597/1024 [MB] (17 MBps) [2024-12-07T17:48:34.768Z] Copying: 615/1024 [MB] (18 MBps) [2024-12-07T17:48:35.712Z] Copying: 634/1024 [MB] (18 MBps) 
[2024-12-07T17:48:36.656Z] Copying: 650/1024 [MB] (16 MBps) [2024-12-07T17:48:37.597Z] Copying: 665/1024 [MB] (15 MBps) [2024-12-07T17:48:38.539Z] Copying: 681/1024 [MB] (15 MBps) [2024-12-07T17:48:39.483Z] Copying: 692/1024 [MB] (11 MBps) [2024-12-07T17:48:40.427Z] Copying: 708/1024 [MB] (15 MBps) [2024-12-07T17:48:41.814Z] Copying: 733/1024 [MB] (24 MBps) [2024-12-07T17:48:42.758Z] Copying: 749/1024 [MB] (16 MBps) [2024-12-07T17:48:43.705Z] Copying: 765/1024 [MB] (16 MBps) [2024-12-07T17:48:44.728Z] Copying: 779/1024 [MB] (14 MBps) [2024-12-07T17:48:45.673Z] Copying: 792/1024 [MB] (12 MBps) [2024-12-07T17:48:46.618Z] Copying: 808/1024 [MB] (16 MBps) [2024-12-07T17:48:47.648Z] Copying: 830/1024 [MB] (22 MBps) [2024-12-07T17:48:48.589Z] Copying: 849/1024 [MB] (19 MBps) [2024-12-07T17:48:49.533Z] Copying: 866/1024 [MB] (16 MBps) [2024-12-07T17:48:50.477Z] Copying: 880/1024 [MB] (14 MBps) [2024-12-07T17:48:51.421Z] Copying: 893/1024 [MB] (13 MBps) [2024-12-07T17:48:52.807Z] Copying: 910/1024 [MB] (16 MBps) [2024-12-07T17:48:53.751Z] Copying: 926/1024 [MB] (16 MBps) [2024-12-07T17:48:54.690Z] Copying: 944/1024 [MB] (17 MBps) [2024-12-07T17:48:55.633Z] Copying: 960/1024 [MB] (16 MBps) [2024-12-07T17:48:56.576Z] Copying: 976/1024 [MB] (16 MBps) [2024-12-07T17:48:57.566Z] Copying: 991/1024 [MB] (15 MBps) [2024-12-07T17:48:58.510Z] Copying: 1007/1024 [MB] (15 MBps) [2024-12-07T17:48:58.510Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-12-07 17:48:58.272821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.128 [2024-12-07 17:48:58.272961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:25.128 [2024-12-07 17:48:58.273005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:25.128 [2024-12-07 17:48:58.273023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.128 [2024-12-07 17:48:58.273052] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:25.128 [2024-12-07 17:48:58.275380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.128 [2024-12-07 17:48:58.275471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:25.128 [2024-12-07 17:48:58.275525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.254 ms 00:32:25.128 [2024-12-07 17:48:58.275534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.128 [2024-12-07 17:48:58.277948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.128 [2024-12-07 17:48:58.277977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:25.128 [2024-12-07 17:48:58.277993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.395 ms 00:32:25.128 [2024-12-07 17:48:58.278000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.128 [2024-12-07 17:48:58.278021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.128 [2024-12-07 17:48:58.278028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:25.128 [2024-12-07 17:48:58.278040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:25.128 [2024-12-07 17:48:58.278047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.128 [2024-12-07 17:48:58.278091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.128 [2024-12-07 17:48:58.278100] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:25.128 [2024-12-07 17:48:58.278107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:32:25.128 [2024-12-07 17:48:58.278113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.128 [2024-12-07 17:48:58.278123] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:25.128 [2024-12-07 17:48:58.278133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 
state: free 00:32:25.129 [2024-12-07 17:48:58.278260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 
0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:25.129 [2024-12-07 17:48:58.278542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278700] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:25.130 [2024-12-07 17:48:58.278728] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:25.130 [2024-12-07 17:48:58.278734] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 58d6a6c1-aac4-4bca-8413-1635a6a457e5 00:32:25.130 [2024-12-07 17:48:58.278741] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:25.130 [2024-12-07 17:48:58.278746] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:25.130 [2024-12-07 17:48:58.278752] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:25.130 [2024-12-07 17:48:58.278760] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:25.130 [2024-12-07 17:48:58.278766] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:25.130 [2024-12-07 17:48:58.278772] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:25.130 [2024-12-07 17:48:58.278778] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:25.130 [2024-12-07 17:48:58.278782] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:25.130 [2024-12-07 17:48:58.278787] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:25.130 [2024-12-07 17:48:58.278792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.130 [2024-12-07 17:48:58.278798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:25.130 [2024-12-07 17:48:58.278805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 00:32:25.130 [2024-12-07 17:48:58.278810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.130 [2024-12-07 17:48:58.288941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.130 [2024-12-07 17:48:58.288969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:25.130 [2024-12-07 17:48:58.288978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.120 ms 00:32:25.130 [2024-12-07 17:48:58.289009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.130 [2024-12-07 17:48:58.289297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.130 [2024-12-07 17:48:58.289309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:25.130 [2024-12-07 17:48:58.289316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:32:25.130 [2024-12-07 17:48:58.289321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.130 [2024-12-07 17:48:58.316604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:25.130 [2024-12-07 17:48:58.316630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:25.130 [2024-12-07 17:48:58.316637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:25.130 [2024-12-07 17:48:58.316643] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:32:25.130 [2024-12-07 17:48:58.316689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:25.130 [2024-12-07 17:48:58.316695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:25.130 [2024-12-07 17:48:58.316701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:25.130 [2024-12-07 17:48:58.316706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.130 [2024-12-07 17:48:58.316744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:25.130 [2024-12-07 17:48:58.316754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:25.130 [2024-12-07 17:48:58.316760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:25.130 [2024-12-07 17:48:58.316766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.130 [2024-12-07 17:48:58.316778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:25.130 [2024-12-07 17:48:58.316784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:25.130 [2024-12-07 17:48:58.316793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:25.130 [2024-12-07 17:48:58.316799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.130 [2024-12-07 17:48:58.379651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:25.130 [2024-12-07 17:48:58.379687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:25.130 [2024-12-07 17:48:58.379696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:25.130 [2024-12-07 17:48:58.379702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.130 [2024-12-07 17:48:58.430727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:25.130 [2024-12-07 17:48:58.430891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:25.130 [2024-12-07 17:48:58.430904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:25.131 [2024-12-07 17:48:58.430911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.131 [2024-12-07 17:48:58.430996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:25.131 [2024-12-07 17:48:58.431005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:25.131 [2024-12-07 17:48:58.431016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:25.131 [2024-12-07 17:48:58.431023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.131 [2024-12-07 17:48:58.431055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:25.131 [2024-12-07 17:48:58.431063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:25.131 [2024-12-07 17:48:58.431069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:25.131 [2024-12-07 17:48:58.431076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.131 [2024-12-07 17:48:58.431138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:25.131 [2024-12-07 17:48:58.431145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:25.131 [2024-12-07 17:48:58.431158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:32:25.131 [2024-12-07 17:48:58.431166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.131 [2024-12-07 17:48:58.431188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:25.131 [2024-12-07 17:48:58.431195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:25.131 [2024-12-07 17:48:58.431202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:25.131 [2024-12-07 17:48:58.431207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.131 [2024-12-07 17:48:58.431242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:25.131 [2024-12-07 17:48:58.431250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:25.131 [2024-12-07 17:48:58.431256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:25.131 [2024-12-07 17:48:58.431264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.131 [2024-12-07 17:48:58.431303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:25.131 [2024-12-07 17:48:58.431311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:25.131 [2024-12-07 17:48:58.431318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:25.131 [2024-12-07 17:48:58.431323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.131 [2024-12-07 17:48:58.431433] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 158.581 ms, result 0 00:32:26.517 00:32:26.517 00:32:26.517 17:48:59 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:32:26.517 [2024-12-07 17:48:59.637747] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:32:26.517 [2024-12-07 17:48:59.638059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84860 ] 00:32:26.517 [2024-12-07 17:48:59.794314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.517 [2024-12-07 17:48:59.884321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.778 [2024-12-07 17:49:00.121664] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:26.778 [2024-12-07 17:49:00.121884] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:27.041 [2024-12-07 17:49:00.277663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.041 [2024-12-07 17:49:00.277785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:27.041 [2024-12-07 17:49:00.277839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:27.041 [2024-12-07 17:49:00.277859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.041 [2024-12-07 17:49:00.277914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.041 [2024-12-07 17:49:00.277936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:27.041 [2024-12-07 17:49:00.277952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:32:27.041 [2024-12-07 17:49:00.277968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.041 [2024-12-07 17:49:00.278005] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:27.041 [2024-12-07 17:49:00.278575] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:27.041 [2024-12-07 17:49:00.278662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.041 [2024-12-07 17:49:00.278703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:27.041 [2024-12-07 17:49:00.278722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:32:27.041 [2024-12-07 17:49:00.278737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.041 [2024-12-07 17:49:00.279187] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:27.041 [2024-12-07 17:49:00.279245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.041 [2024-12-07 17:49:00.279324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:27.041 [2024-12-07 17:49:00.279403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:32:27.041 [2024-12-07 17:49:00.279423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.041 [2024-12-07 17:49:00.279477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.041 [2024-12-07 17:49:00.279495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:27.041 [2024-12-07 17:49:00.279511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:32:27.041 [2024-12-07 17:49:00.279525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.041 [2024-12-07 17:49:00.279746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:27.041 [2024-12-07 17:49:00.279907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:27.041 [2024-12-07 17:49:00.279927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:32:27.041 [2024-12-07 17:49:00.279943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.041 [2024-12-07 17:49:00.280023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.041 [2024-12-07 17:49:00.280043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:27.041 [2024-12-07 17:49:00.280102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:32:27.041 [2024-12-07 17:49:00.280121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.041 [2024-12-07 17:49:00.280150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.041 [2024-12-07 17:49:00.280168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:27.041 [2024-12-07 17:49:00.280186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:27.041 [2024-12-07 17:49:00.280200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.041 [2024-12-07 17:49:00.280258] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:27.041 [2024-12-07 17:49:00.283498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.041 [2024-12-07 17:49:00.283589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:27.041 [2024-12-07 17:49:00.283783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.245 ms 00:32:27.041 [2024-12-07 17:49:00.283882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.041 [2024-12-07 17:49:00.284054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.041 [2024-12-07 17:49:00.284352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:27.041 [2024-12-07 17:49:00.284402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:32:27.041 [2024-12-07 17:49:00.284445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.041 [2024-12-07 17:49:00.284657] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:27.041 [2024-12-07 17:49:00.284748] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:27.041 [2024-12-07 17:49:00.284882] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:27.041 [2024-12-07 17:49:00.285082] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:27.041 [2024-12-07 17:49:00.285400] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:27.041 [2024-12-07 17:49:00.285492] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:27.041 [2024-12-07 17:49:00.285623] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:27.041 [2024-12-07 17:49:00.285736] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:27.041 [2024-12-07 17:49:00.286022] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:27.041 [2024-12-07 17:49:00.286189] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:27.041 [2024-12-07 17:49:00.286237] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:27.041 [2024-12-07 17:49:00.286314] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:27.041 [2024-12-07 17:49:00.286402] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:27.041 [2024-12-07 17:49:00.286527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.041 [2024-12-07 17:49:00.286582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:27.041 [2024-12-07 17:49:00.286696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.873 ms 00:32:27.041 [2024-12-07 17:49:00.286794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.041 [2024-12-07 17:49:00.287071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.041 [2024-12-07 17:49:00.287204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:27.041 [2024-12-07 17:49:00.287310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:32:27.041 [2024-12-07 17:49:00.287367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.041 [2024-12-07 17:49:00.287609] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:27.041 [2024-12-07 17:49:00.287634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:27.041 [2024-12-07 17:49:00.287652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:27.041 [2024-12-07 17:49:00.287670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:27.041 [2024-12-07 17:49:00.287688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:27.041 [2024-12-07 17:49:00.287703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:27.041 [2024-12-07 17:49:00.287719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:27.041 [2024-12-07 17:49:00.287734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:27.041 [2024-12-07 17:49:00.287750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:27.041 [2024-12-07 17:49:00.287766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:27.041 [2024-12-07 17:49:00.287781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:27.041 [2024-12-07 17:49:00.287796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:27.041 [2024-12-07 17:49:00.287810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:27.041 [2024-12-07 17:49:00.287825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:27.041 [2024-12-07 17:49:00.287842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:27.041 [2024-12-07 17:49:00.287866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:27.041 [2024-12-07 17:49:00.287881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:27.041 [2024-12-07 17:49:00.287897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:27.041 [2024-12-07 17:49:00.287913] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:27.041 [2024-12-07 17:49:00.287929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:27.041 [2024-12-07 17:49:00.287944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:27.041 [2024-12-07 17:49:00.287958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:27.041 [2024-12-07 17:49:00.287973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:27.041 [2024-12-07 17:49:00.288017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:27.041 [2024-12-07 17:49:00.288032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:27.042 [2024-12-07 17:49:00.288048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:27.042 [2024-12-07 17:49:00.288062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:27.042 [2024-12-07 17:49:00.288077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:27.042 [2024-12-07 17:49:00.288093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:27.042 [2024-12-07 17:49:00.288107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:27.042 [2024-12-07 17:49:00.288122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:27.042 [2024-12-07 17:49:00.288137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:27.042 [2024-12-07 17:49:00.288153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:27.042 [2024-12-07 17:49:00.288168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:27.042 [2024-12-07 17:49:00.288182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:27.042 [2024-12-07 17:49:00.288197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:27.042 [2024-12-07 17:49:00.288214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:27.042 [2024-12-07 17:49:00.288230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:27.042 [2024-12-07 17:49:00.288245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:27.042 [2024-12-07 17:49:00.288263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:27.042 [2024-12-07 17:49:00.288279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:27.042 [2024-12-07 17:49:00.288294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:27.042 [2024-12-07 17:49:00.288309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:27.042 [2024-12-07 17:49:00.288323] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:27.042 [2024-12-07 17:49:00.288340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:27.042 [2024-12-07 17:49:00.288356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:27.042 [2024-12-07 17:49:00.288371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:27.042 [2024-12-07 17:49:00.288393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:27.042 [2024-12-07 17:49:00.288408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:27.042 [2024-12-07 17:49:00.288423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:27.042 
[2024-12-07 17:49:00.288439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:27.042 [2024-12-07 17:49:00.288454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:27.042 [2024-12-07 17:49:00.288469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:27.042 [2024-12-07 17:49:00.288488] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:27.042 [2024-12-07 17:49:00.288508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:27.042 [2024-12-07 17:49:00.288528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:27.042 [2024-12-07 17:49:00.288545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:27.042 [2024-12-07 17:49:00.288560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:27.042 [2024-12-07 17:49:00.288577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:27.042 [2024-12-07 17:49:00.288593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:27.042 [2024-12-07 17:49:00.288609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:27.042 [2024-12-07 17:49:00.288625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:27.042 [2024-12-07 17:49:00.288641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:27.042 [2024-12-07 17:49:00.288657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:27.042 [2024-12-07 17:49:00.288673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:27.042 [2024-12-07 17:49:00.288689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:27.042 [2024-12-07 17:49:00.288705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:27.042 [2024-12-07 17:49:00.288721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:27.042 [2024-12-07 17:49:00.288738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:27.042 [2024-12-07 17:49:00.288755] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:27.042 [2024-12-07 17:49:00.288772] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:27.042 [2024-12-07 17:49:00.288791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:27.042 [2024-12-07 17:49:00.288809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:27.042 [2024-12-07 17:49:00.288825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:27.042 [2024-12-07 17:49:00.288842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:27.042 [2024-12-07 17:49:00.288859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.042 [2024-12-07 17:49:00.288876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:27.042 [2024-12-07 17:49:00.288893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.410 ms 00:32:27.042 [2024-12-07 17:49:00.288908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.042 [2024-12-07 17:49:00.316369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.042 [2024-12-07 17:49:00.316477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:27.042 [2024-12-07 17:49:00.316525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.332 ms 00:32:27.042 [2024-12-07 17:49:00.316546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.042 [2024-12-07 17:49:00.316641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.042 [2024-12-07 17:49:00.316663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:27.042 [2024-12-07 17:49:00.316687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:32:27.042 [2024-12-07 17:49:00.316706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.042 [2024-12-07 17:49:00.366841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.042 [2024-12-07 17:49:00.367002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:27.042 [2024-12-07 17:49:00.367062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.074 ms 00:32:27.042 [2024-12-07 17:49:00.367086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.042 [2024-12-07 17:49:00.367145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.042 [2024-12-07 17:49:00.367170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:27.042 [2024-12-07 17:49:00.367190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:27.042 [2024-12-07 17:49:00.367209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.042 [2024-12-07 17:49:00.367376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.042 [2024-12-07 17:49:00.367407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:27.042 [2024-12-07 17:49:00.367428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:32:27.042 [2024-12-07 17:49:00.367451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.042 [2024-12-07 17:49:00.367592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.042 [2024-12-07 17:49:00.367765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:27.042 [2024-12-07 17:49:00.367790] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:32:27.042 [2024-12-07 17:49:00.367809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.042 [2024-12-07 17:49:00.383232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.042 [2024-12-07 17:49:00.383352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:27.042 [2024-12-07 17:49:00.383402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.389 ms 00:32:27.042 [2024-12-07 17:49:00.383426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.042 [2024-12-07 17:49:00.383585] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:27.042 [2024-12-07 17:49:00.383628] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:27.042 [2024-12-07 17:49:00.383711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.042 [2024-12-07 17:49:00.383735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:27.042 [2024-12-07 17:49:00.383756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:32:27.042 [2024-12-07 17:49:00.383775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.042 [2024-12-07 17:49:00.396090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.042 [2024-12-07 17:49:00.396209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:27.042 [2024-12-07 17:49:00.396262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.286 ms 00:32:27.042 [2024-12-07 17:49:00.396286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.042 [2024-12-07 17:49:00.396428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.042 [2024-12-07 17:49:00.396452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:27.042 [2024-12-07 17:49:00.396512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:32:27.042 [2024-12-07 17:49:00.396541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.042 [2024-12-07 17:49:00.396605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.042 [2024-12-07 17:49:00.396657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:27.042 [2024-12-07 17:49:00.396690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:27.042 [2024-12-07 17:49:00.396710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.042 [2024-12-07 17:49:00.397392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.043 [2024-12-07 17:49:00.397497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:27.043 [2024-12-07 17:49:00.397547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:32:27.043 [2024-12-07 17:49:00.397570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.043 [2024-12-07 17:49:00.397608] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:27.043 [2024-12-07 17:49:00.397640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.043 [2024-12-07 17:49:00.397661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:32:27.043 [2024-12-07 17:49:00.397682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:32:27.043 [2024-12-07 17:49:00.397701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.043 [2024-12-07 17:49:00.410687] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:27.043 [2024-12-07 17:49:00.410939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.043 [2024-12-07 17:49:00.410970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:27.043 [2024-12-07 17:49:00.411073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.208 ms 00:32:27.043 [2024-12-07 17:49:00.411097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.043 [2024-12-07 17:49:00.413431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.043 [2024-12-07 17:49:00.413551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:27.043 [2024-12-07 17:49:00.413604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.213 ms 00:32:27.043 [2024-12-07 17:49:00.413628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.043 [2024-12-07 17:49:00.413739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.043 [2024-12-07 17:49:00.413765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:27.043 [2024-12-07 17:49:00.413788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:32:27.043 [2024-12-07 17:49:00.413807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.043 [2024-12-07 17:49:00.413843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.043 [2024-12-07 17:49:00.413874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:27.043 [2024-12-07 17:49:00.413896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:27.043 [2024-12-07 17:49:00.413957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.043 [2024-12-07 17:49:00.414029] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:27.043 [2024-12-07 17:49:00.414164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.043 [2024-12-07 17:49:00.414191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:27.043 [2024-12-07 17:49:00.414211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:32:27.043 [2024-12-07 17:49:00.414232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.305 [2024-12-07 17:49:00.441338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.305 [2024-12-07 17:49:00.441503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:27.305 [2024-12-07 17:49:00.441563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.068 ms 00:32:27.305 [2024-12-07 17:49:00.441589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.305 [2024-12-07 17:49:00.441769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:27.305 [2024-12-07 17:49:00.441801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:27.305 [2024-12-07 17:49:00.441825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.037 ms 00:32:27.305 [2024-12-07 17:49:00.441898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:27.305 [2024-12-07 17:49:00.443283] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 165.056 ms, result 0 00:32:28.695  [2024-12-07T17:50:18.102Z] Copying: 1024/1024 [MB] (average 13 MBps)
[2024-12-07 17:50:17.955206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:44.720 [2024-12-07 17:50:17.955300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:44.720 [2024-12-07 17:50:17.955327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:44.720 [2024-12-07 17:50:17.955342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.720 [2024-12-07 17:50:17.955389] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:44.720 [2024-12-07 17:50:17.960196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:44.720 [2024-12-07 17:50:17.960246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:44.720 [2024-12-07 17:50:17.960265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.781 ms 00:33:44.720 [2024-12-07 17:50:17.960279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.720 [2024-12-07 17:50:17.960652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:44.720 [2024-12-07 17:50:17.960667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:44.720 [2024-12-07 17:50:17.960682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:33:44.720 [2024-12-07 17:50:17.960696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.720 [2024-12-07 17:50:17.960750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:44.720 [2024-12-07 17:50:17.960764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:44.720 [2024-12-07 17:50:17.960777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:33:44.720 [2024-12-07 17:50:17.960790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.720 [2024-12-07 17:50:17.960873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:44.720 [2024-12-07 17:50:17.960887]
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:44.720 [2024-12-07 17:50:17.960900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:33:44.720 [2024-12-07 17:50:17.960912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.720 [2024-12-07 17:50:17.960934] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:44.720 [2024-12-07 17:50:17.960955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.960975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 
state: free 00:33:44.720 [2024-12-07 17:50:17.961262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 
0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:44.720 [2024-12-07 17:50:17.961953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:44.721 [2024-12-07 17:50:17.961961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:44.721 [2024-12-07 17:50:17.961969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:44.721 [2024-12-07 17:50:17.962004] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:44.721 [2024-12-07 17:50:17.962012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:44.721 [2024-12-07 17:50:17.962021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:44.721 [2024-12-07 17:50:17.962028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:44.721 [2024-12-07 17:50:17.962044] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:44.721 [2024-12-07 17:50:17.962053] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 58d6a6c1-aac4-4bca-8413-1635a6a457e5 00:33:44.721 [2024-12-07 17:50:17.962062] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:44.721 [2024-12-07 17:50:17.962072] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:33:44.721 [2024-12-07 17:50:17.962080] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:44.721 [2024-12-07 17:50:17.962089] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:44.721 [2024-12-07 17:50:17.962097] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:44.721 [2024-12-07 17:50:17.962106] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:44.721 [2024-12-07 17:50:17.962118] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:44.721 [2024-12-07 17:50:17.962124] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:44.721 [2024-12-07 17:50:17.962131] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:44.721 [2024-12-07 17:50:17.962139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:44.721 [2024-12-07 17:50:17.962147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:44.721 [2024-12-07 17:50:17.962156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.206 ms 00:33:44.721 [2024-12-07 17:50:17.962166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.721 [2024-12-07 17:50:17.978030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:44.721 [2024-12-07 17:50:17.978250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:44.721 [2024-12-07 17:50:17.978274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.844 ms 00:33:44.721 [2024-12-07 17:50:17.978284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.721 [2024-12-07 17:50:17.978716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:44.721 [2024-12-07 17:50:17.978727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:44.721 [2024-12-07 17:50:17.978745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:33:44.721 [2024-12-07 17:50:17.978754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.721 [2024-12-07 17:50:18.018468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:44.721 [2024-12-07 17:50:18.018639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:44.721 [2024-12-07 17:50:18.018704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:44.721 [2024-12-07 17:50:18.018731] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:33:44.721 [2024-12-07 17:50:18.018830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:44.721 [2024-12-07 17:50:18.018860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:44.721 [2024-12-07 17:50:18.018889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:44.721 [2024-12-07 17:50:18.018910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.721 [2024-12-07 17:50:18.019022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:44.721 [2024-12-07 17:50:18.019178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:44.721 [2024-12-07 17:50:18.019204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:44.721 [2024-12-07 17:50:18.019225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.721 [2024-12-07 17:50:18.019261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:44.721 [2024-12-07 17:50:18.019283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:44.721 [2024-12-07 17:50:18.019303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:44.721 [2024-12-07 17:50:18.019331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.981 [2024-12-07 17:50:18.113069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:44.981 [2024-12-07 17:50:18.113266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:44.981 [2024-12-07 17:50:18.113361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:44.981 [2024-12-07 17:50:18.113390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.981 [2024-12-07 17:50:18.189497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:44.981 [2024-12-07 17:50:18.189699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:44.981 [2024-12-07 17:50:18.189759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:44.981 [2024-12-07 17:50:18.189794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.981 [2024-12-07 17:50:18.189928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:44.981 [2024-12-07 17:50:18.189956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:44.981 [2024-12-07 17:50:18.190006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:44.981 [2024-12-07 17:50:18.190030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.981 [2024-12-07 17:50:18.190093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:44.981 [2024-12-07 17:50:18.190118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:44.981 [2024-12-07 17:50:18.190221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:44.981 [2024-12-07 17:50:18.190248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.981 [2024-12-07 17:50:18.190367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:44.981 [2024-12-07 17:50:18.190395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:44.981 [2024-12-07 17:50:18.190415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:33:44.981 [2024-12-07 17:50:18.190436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.981 [2024-12-07 17:50:18.190477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:44.981 [2024-12-07 17:50:18.190506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:44.981 [2024-12-07 17:50:18.190527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:44.981 [2024-12-07 17:50:18.190607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.981 [2024-12-07 17:50:18.190683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:44.981 [2024-12-07 17:50:18.190708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:44.981 [2024-12-07 17:50:18.190730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:44.981 [2024-12-07 17:50:18.190752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.981 [2024-12-07 17:50:18.190820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:44.981 [2024-12-07 17:50:18.190846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:44.981 [2024-12-07 17:50:18.190867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:44.981 [2024-12-07 17:50:18.190888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:44.981 [2024-12-07 17:50:18.191085] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 235.844 ms, result 0 00:33:45.921 00:33:45.921 00:33:45.921 17:50:19 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:47.837 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:47.837 17:50:21 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:33:48.099 [2024-12-07 17:50:21.230416] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 
00:33:48.099 [2024-12-07 17:50:21.230705] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85678 ] 00:33:48.099 [2024-12-07 17:50:21.389720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.359 [2024-12-07 17:50:21.529936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.620 [2024-12-07 17:50:21.870546] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:48.620 [2024-12-07 17:50:21.870646] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:48.883 [2024-12-07 17:50:22.037478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.883 [2024-12-07 17:50:22.037545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:48.883 [2024-12-07 17:50:22.037563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:48.883 [2024-12-07 17:50:22.037573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.883 [2024-12-07 17:50:22.037637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.883 [2024-12-07 17:50:22.037652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:48.883 [2024-12-07 17:50:22.037662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:33:48.883 [2024-12-07 17:50:22.037670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.883 [2024-12-07 17:50:22.037692] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:48.883 [2024-12-07 17:50:22.038477] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:48.883 [2024-12-07 17:50:22.038511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.883 [2024-12-07 17:50:22.038521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:48.883 [2024-12-07 17:50:22.038530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.825 ms 00:33:48.883 [2024-12-07 17:50:22.038539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.883 [2024-12-07 17:50:22.038866] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:48.883 [2024-12-07 17:50:22.038897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.883 [2024-12-07 17:50:22.038911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:48.883 [2024-12-07 17:50:22.038921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:33:48.883 [2024-12-07 17:50:22.038930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.883 [2024-12-07 17:50:22.039016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.883 [2024-12-07 17:50:22.039029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:48.883 [2024-12-07 17:50:22.039039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:33:48.883 [2024-12-07 17:50:22.039047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.883 [2024-12-07 17:50:22.039383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:48.883 [2024-12-07 17:50:22.039398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:48.883 [2024-12-07 17:50:22.039408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:33:48.883 [2024-12-07 17:50:22.039416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.883 [2024-12-07 17:50:22.039493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.883 [2024-12-07 17:50:22.039504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:48.883 [2024-12-07 17:50:22.039514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:33:48.883 [2024-12-07 17:50:22.039524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.883 [2024-12-07 17:50:22.039549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.883 [2024-12-07 17:50:22.039557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:48.883 [2024-12-07 17:50:22.039569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:48.883 [2024-12-07 17:50:22.039578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.883 [2024-12-07 17:50:22.039600] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:48.883 [2024-12-07 17:50:22.044660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.883 [2024-12-07 17:50:22.044706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:48.883 [2024-12-07 17:50:22.044718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.065 ms 00:33:48.883 [2024-12-07 17:50:22.044727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.883 [2024-12-07 17:50:22.044773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.883 [2024-12-07 17:50:22.044783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:48.883 [2024-12-07 17:50:22.044792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:33:48.883 [2024-12-07 17:50:22.044799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.883 [2024-12-07 17:50:22.044862] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:48.883 [2024-12-07 17:50:22.044890] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:48.883 [2024-12-07 17:50:22.044933] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:48.883 [2024-12-07 17:50:22.044950] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:48.883 [2024-12-07 17:50:22.045291] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:48.883 [2024-12-07 17:50:22.045366] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:48.883 [2024-12-07 17:50:22.045401] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:48.883 [2024-12-07 17:50:22.045435] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:48.883 [2024-12-07 17:50:22.045535] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:48.883 [2024-12-07 17:50:22.045553] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:48.883 [2024-12-07 17:50:22.045563] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:48.883 [2024-12-07 17:50:22.045570] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:48.883 [2024-12-07 17:50:22.045578] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:48.883 [2024-12-07 17:50:22.045587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.883 [2024-12-07 17:50:22.045596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:48.883 [2024-12-07 17:50:22.045606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:33:48.883 [2024-12-07 17:50:22.045614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.883 [2024-12-07 17:50:22.045742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.883 [2024-12-07 17:50:22.045758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:48.883 [2024-12-07 17:50:22.045767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:33:48.883 [2024-12-07 17:50:22.045779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.883 [2024-12-07 17:50:22.045905] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:48.883 [2024-12-07 17:50:22.045918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:48.883 [2024-12-07 17:50:22.045927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:48.883 [2024-12-07 17:50:22.045936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:48.883 [2024-12-07 17:50:22.045947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:48.883 [2024-12-07 17:50:22.045955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:48.883 [2024-12-07 17:50:22.045963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:48.883 [2024-12-07 17:50:22.045971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:48.883 [2024-12-07 17:50:22.045997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:48.883 [2024-12-07 17:50:22.046006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:48.883 [2024-12-07 17:50:22.046013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:48.883 [2024-12-07 17:50:22.046020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:48.883 [2024-12-07 17:50:22.046027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:48.883 [2024-12-07 17:50:22.046034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:48.884 [2024-12-07 17:50:22.046042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:48.884 [2024-12-07 17:50:22.046058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:48.884 [2024-12-07 17:50:22.046065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:48.884 [2024-12-07 17:50:22.046072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:48.884 [2024-12-07 17:50:22.046078] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:48.884 [2024-12-07 17:50:22.046086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:48.884 [2024-12-07 17:50:22.046094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:48.884 [2024-12-07 17:50:22.046102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:48.884 [2024-12-07 17:50:22.046109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:48.884 [2024-12-07 17:50:22.046117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:48.884 [2024-12-07 17:50:22.046125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:48.884 [2024-12-07 17:50:22.046132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:48.884 [2024-12-07 17:50:22.046147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:48.884 [2024-12-07 17:50:22.046154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:48.884 [2024-12-07 17:50:22.046165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:48.884 [2024-12-07 17:50:22.046172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:48.884 [2024-12-07 17:50:22.046179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:48.884 [2024-12-07 17:50:22.046193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:48.884 [2024-12-07 17:50:22.046200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:48.884 [2024-12-07 17:50:22.046207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:48.884 [2024-12-07 17:50:22.046215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:48.884 [2024-12-07 17:50:22.046229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:48.884 [2024-12-07 17:50:22.046237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:48.884 [2024-12-07 17:50:22.046245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:48.884 [2024-12-07 17:50:22.046260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:48.884 [2024-12-07 17:50:22.046266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:48.884 [2024-12-07 17:50:22.046280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:48.884 [2024-12-07 17:50:22.046287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:48.884 [2024-12-07 17:50:22.046301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:48.884 [2024-12-07 17:50:22.046308] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:48.884 [2024-12-07 17:50:22.046317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:48.884 [2024-12-07 17:50:22.046324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:48.884 [2024-12-07 17:50:22.046332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:48.884 [2024-12-07 17:50:22.046343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:48.884 [2024-12-07 17:50:22.046356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:48.884 [2024-12-07 17:50:22.046362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:48.884 
[2024-12-07 17:50:22.046369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:48.884 [2024-12-07 17:50:22.046376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:48.884 [2024-12-07 17:50:22.046382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:48.884 [2024-12-07 17:50:22.046393] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:48.884 [2024-12-07 17:50:22.046403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:48.884 [2024-12-07 17:50:22.046412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:48.884 [2024-12-07 17:50:22.046419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:48.884 [2024-12-07 17:50:22.046427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:48.884 [2024-12-07 17:50:22.046435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:48.884 [2024-12-07 17:50:22.046442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:48.884 [2024-12-07 17:50:22.046449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:48.884 [2024-12-07 17:50:22.046456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:48.884 [2024-12-07 17:50:22.046463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:48.884 [2024-12-07 17:50:22.046471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:48.884 [2024-12-07 17:50:22.046478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:48.884 [2024-12-07 17:50:22.046485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:48.884 [2024-12-07 17:50:22.046493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:48.884 [2024-12-07 17:50:22.046500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:48.884 [2024-12-07 17:50:22.046509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:48.884 [2024-12-07 17:50:22.046517] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:48.884 [2024-12-07 17:50:22.046525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:48.884 [2024-12-07 17:50:22.046534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:48.884 [2024-12-07 17:50:22.046541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:48.884 [2024-12-07 17:50:22.046548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:48.884 [2024-12-07 17:50:22.046556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:48.884 [2024-12-07 17:50:22.046563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.884 [2024-12-07 17:50:22.046572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:48.884 [2024-12-07 17:50:22.046581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:33:48.884 [2024-12-07 17:50:22.046589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.884 [2024-12-07 17:50:22.077200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.884 [2024-12-07 17:50:22.077233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:48.884 [2024-12-07 17:50:22.077244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.568 ms 00:33:48.884 [2024-12-07 17:50:22.077251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.884 [2024-12-07 17:50:22.077351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.884 [2024-12-07 17:50:22.077360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:48.884 [2024-12-07 17:50:22.077372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:33:48.885 [2024-12-07 17:50:22.077380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.885 [2024-12-07 17:50:22.126606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.885 [2024-12-07 17:50:22.126647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:48.885 [2024-12-07 17:50:22.126660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.178 ms 00:33:48.885 [2024-12-07 17:50:22.126669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.885 [2024-12-07 17:50:22.126710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.885 [2024-12-07 17:50:22.126721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:48.885 [2024-12-07 17:50:22.126729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:48.885 [2024-12-07 17:50:22.126737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.885 [2024-12-07 17:50:22.126832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.885 [2024-12-07 17:50:22.126844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:48.885 [2024-12-07 17:50:22.126852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:33:48.885 [2024-12-07 17:50:22.126860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.885 [2024-12-07 17:50:22.127001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.885 [2024-12-07 17:50:22.127014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:48.885 [2024-12-07 17:50:22.127023] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:33:48.885 [2024-12-07 17:50:22.127030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.885 [2024-12-07 17:50:22.141840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.885 [2024-12-07 17:50:22.141873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:48.885 [2024-12-07 17:50:22.141883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.791 ms 00:33:48.885 [2024-12-07 17:50:22.141891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.885 [2024-12-07 17:50:22.142053] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:48.885 [2024-12-07 17:50:22.142068] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:48.885 [2024-12-07 17:50:22.142081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.885 [2024-12-07 17:50:22.142090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:48.885 [2024-12-07 17:50:22.142099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:33:48.885 [2024-12-07 17:50:22.142108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.885 [2024-12-07 17:50:22.154373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.885 [2024-12-07 17:50:22.154406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:48.885 [2024-12-07 17:50:22.154416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.246 ms 00:33:48.885 [2024-12-07 17:50:22.154424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.885 [2024-12-07 17:50:22.154546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.885 [2024-12-07 17:50:22.154555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:48.885 [2024-12-07 17:50:22.154563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:33:48.885 [2024-12-07 17:50:22.154574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.885 [2024-12-07 17:50:22.154619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.885 [2024-12-07 17:50:22.154628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:48.885 [2024-12-07 17:50:22.154643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:33:48.885 [2024-12-07 17:50:22.154651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.885 [2024-12-07 17:50:22.155241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.885 [2024-12-07 17:50:22.155255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:48.885 [2024-12-07 17:50:22.155263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:33:48.885 [2024-12-07 17:50:22.155271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.885 [2024-12-07 17:50:22.155293] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:48.885 [2024-12-07 17:50:22.155303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.885 [2024-12-07 17:50:22.155311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:33:48.885 [2024-12-07 17:50:22.155319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:33:48.885 [2024-12-07 17:50:22.155327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.885 [2024-12-07 17:50:22.167500] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:48.885 [2024-12-07 17:50:22.167640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.885 [2024-12-07 17:50:22.167651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:48.885 [2024-12-07 17:50:22.167661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.296 ms 00:33:48.885 [2024-12-07 17:50:22.167668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.885 [2024-12-07 17:50:22.169861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.885 [2024-12-07 17:50:22.169888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:48.885 [2024-12-07 17:50:22.169897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.173 ms 00:33:48.885 [2024-12-07 17:50:22.169905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.885 [2024-12-07 17:50:22.170006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.885 [2024-12-07 17:50:22.170016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:48.885 [2024-12-07 17:50:22.170025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:33:48.885 [2024-12-07 17:50:22.170033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.885 [2024-12-07 17:50:22.170056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.885 [2024-12-07 17:50:22.170069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:48.885 [2024-12-07 17:50:22.170077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:48.885 [2024-12-07 17:50:22.170085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.885 [2024-12-07 17:50:22.170116] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:48.885 [2024-12-07 17:50:22.170126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.885 [2024-12-07 17:50:22.170134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:48.885 [2024-12-07 17:50:22.170142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:48.885 [2024-12-07 17:50:22.170150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.885 [2024-12-07 17:50:22.195819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.885 [2024-12-07 17:50:22.195974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:48.885 [2024-12-07 17:50:22.196006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.648 ms 00:33:48.885 [2024-12-07 17:50:22.196015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.885 [2024-12-07 17:50:22.196086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:48.885 [2024-12-07 17:50:22.196096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:48.885 [2024-12-07 17:50:22.196105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.034 ms 00:33:48.885 [2024-12-07 17:50:22.196113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:48.886 [2024-12-07 17:50:22.197241] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 159.305 ms, result 0 00:33:49.879 [2024-12-07T17:50:24.638Z] Copying: 17/1024 [MB] (17 MBps) [... intermediate spdk_dd copy progress updates elided ...] [2024-12-07T17:51:26.177Z] Copying: 1024/1024 [MB] (average 16 MBps)
[2024-12-07 17:51:25.956606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.795 [2024-12-07 17:51:25.956690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:52.795 [2024-12-07 17:51:25.956708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:52.795 [2024-12-07 17:51:25.956718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.795 [2024-12-07 17:51:25.960574] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:52.795 [2024-12-07 17:51:25.965119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.795 [2024-12-07 17:51:25.965168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:52.795 [2024-12-07 17:51:25.965182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.485 ms 00:34:52.795 [2024-12-07 17:51:25.965190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.795 [2024-12-07 17:51:25.976177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.795 [2024-12-07 17:51:25.976227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:52.795 [2024-12-07 17:51:25.976240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.763 ms 00:34:52.795 [2024-12-07 17:51:25.976249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.795 [2024-12-07 17:51:25.976279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.795 [2024-12-07 17:51:25.976290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:52.795 [2024-12-07 17:51:25.976299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:52.795 [2024-12-07 17:51:25.976307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.795 [2024-12-07 17:51:25.976374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.795 [2024-12-07 17:51:25.976387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:52.795 [2024-12-07 17:51:25.976396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:34:52.795 [2024-12-07 17:51:25.976404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.795 [2024-12-07 17:51:25.976418] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:52.795 [2024-12-07 17:51:25.976431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 127488 / 261120 wr_cnt: 1 state: open 00:34:52.795 [2024-12-07 17:51:25.976441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*:
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976648] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 
17:51:25.976850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.976972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.977002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.977011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.977019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.977027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.977035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.977043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.977050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.977058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.977066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:34:52.795 [2024-12-07 17:51:25.977074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.977083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.977091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.977098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.977107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.977115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.977123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:52.795 [2024-12-07 17:51:25.977131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:52.796 [2024-12-07 17:51:25.977139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:52.796 [2024-12-07 17:51:25.977147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:52.796 [2024-12-07 17:51:25.977155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:52.796 [2024-12-07 17:51:25.977164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:52.796 [2024-12-07 17:51:25.977172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:52.796 [2024-12-07 17:51:25.977180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:52.796 [2024-12-07 17:51:25.977188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:52.796 [2024-12-07 17:51:25.977196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:52.796 [2024-12-07 17:51:25.977203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:52.796 [2024-12-07 17:51:25.977211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:52.796 [2024-12-07 17:51:25.977219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:52.796 [2024-12-07 17:51:25.977227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:52.796 [2024-12-07 17:51:25.977235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:52.796 [2024-12-07 17:51:25.977243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:52.796 [2024-12-07 17:51:25.977250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:52.796 [2024-12-07 17:51:25.977266] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:52.796 [2024-12-07 17:51:25.977275] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 58d6a6c1-aac4-4bca-8413-1635a6a457e5 00:34:52.796 
[2024-12-07 17:51:25.977283] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 127488 00:34:52.796 [2024-12-07 17:51:25.977291] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 127520 00:34:52.796 [2024-12-07 17:51:25.977298] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 127488 00:34:52.796 [2024-12-07 17:51:25.977306] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:34:52.796 [2024-12-07 17:51:25.977346] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:52.796 [2024-12-07 17:51:25.977355] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:52.796 [2024-12-07 17:51:25.977364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:52.796 [2024-12-07 17:51:25.977372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:52.796 [2024-12-07 17:51:25.977379] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:52.796 [2024-12-07 17:51:25.977387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.796 [2024-12-07 17:51:25.977395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:52.796 [2024-12-07 17:51:25.977404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:34:52.796 [2024-12-07 17:51:25.977411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.796 [2024-12-07 17:51:25.991445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.796 [2024-12-07 17:51:25.991633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:52.796 [2024-12-07 17:51:25.991662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.015 ms 00:34:52.796 [2024-12-07 17:51:25.991670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.796 [2024-12-07 17:51:25.992115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.796 [2024-12-07 17:51:25.992141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:52.796 [2024-12-07 17:51:25.992152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:34:52.796 [2024-12-07 17:51:25.992160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.796 [2024-12-07 17:51:26.029525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.796 [2024-12-07 17:51:26.029582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:52.796 [2024-12-07 17:51:26.029597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.796 [2024-12-07 17:51:26.029606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.796 [2024-12-07 17:51:26.029678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.796 [2024-12-07 17:51:26.029689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:52.796 [2024-12-07 17:51:26.029699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.796 [2024-12-07 17:51:26.029708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.796 [2024-12-07 17:51:26.029766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.796 [2024-12-07 17:51:26.029777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:52.796 [2024-12-07 17:51:26.029792] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.796 [2024-12-07 17:51:26.029800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.796 [2024-12-07 17:51:26.029817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.796 [2024-12-07 17:51:26.029826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:52.796 [2024-12-07 17:51:26.029834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.796 [2024-12-07 17:51:26.029842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.796 [2024-12-07 17:51:26.114366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.796 [2024-12-07 17:51:26.114429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:52.796 [2024-12-07 17:51:26.114443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.796 [2024-12-07 17:51:26.114451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.057 [2024-12-07 17:51:26.184024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:53.057 [2024-12-07 17:51:26.184090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:53.057 [2024-12-07 17:51:26.184103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:53.057 [2024-12-07 17:51:26.184112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.057 [2024-12-07 17:51:26.184199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:53.057 [2024-12-07 17:51:26.184209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:53.057 [2024-12-07 17:51:26.184218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:53.057 [2024-12-07 17:51:26.184232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.057 [2024-12-07 17:51:26.184271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:53.057 [2024-12-07 17:51:26.184281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:53.057 [2024-12-07 17:51:26.184290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:53.057 [2024-12-07 17:51:26.184299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.057 [2024-12-07 17:51:26.184380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:53.057 [2024-12-07 17:51:26.184391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:53.057 [2024-12-07 17:51:26.184400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:53.057 [2024-12-07 17:51:26.184408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.057 [2024-12-07 17:51:26.184444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:53.057 [2024-12-07 17:51:26.184454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:53.057 [2024-12-07 17:51:26.184463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:53.057 [2024-12-07 17:51:26.184478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.057 [2024-12-07 17:51:26.184520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:53.057 [2024-12-07 17:51:26.184530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:34:53.057 [2024-12-07 17:51:26.184538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:53.057 [2024-12-07 17:51:26.184547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.057 [2024-12-07 17:51:26.184598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:53.057 [2024-12-07 17:51:26.184610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:53.057 [2024-12-07 17:51:26.184618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:53.057 [2024-12-07 17:51:26.184626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:53.057 [2024-12-07 17:51:26.184767] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 229.784 ms, result 0 00:34:54.969 00:34:54.969 00:34:54.969 17:51:27 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:34:54.969 [2024-12-07 17:51:28.035319] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:34:54.969 [2024-12-07 17:51:28.035465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86335 ] 00:34:54.969 [2024-12-07 17:51:28.199902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:54.969 [2024-12-07 17:51:28.329459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:55.541 [2024-12-07 17:51:28.627558] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:55.541 [2024-12-07 17:51:28.627650] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:55.541 [2024-12-07 17:51:28.789791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.541 [2024-12-07 17:51:28.789857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:55.541 [2024-12-07 17:51:28.789873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:55.541 [2024-12-07 17:51:28.789883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.541 [2024-12-07 17:51:28.789940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.541 [2024-12-07 17:51:28.789953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:55.541 [2024-12-07 17:51:28.789963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:34:55.541 [2024-12-07 17:51:28.789971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.541 [2024-12-07 17:51:28.790031] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:55.541 [2024-12-07 17:51:28.790750] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:55.541 [2024-12-07 17:51:28.790772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.541 [2024-12-07 17:51:28.790781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:55.541 [2024-12-07 17:51:28.790791] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:34:55.541 [2024-12-07 17:51:28.790799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.541 [2024-12-07 17:51:28.791138] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:34:55.541 [2024-12-07 17:51:28.791168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.541 [2024-12-07 17:51:28.791181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:55.541 [2024-12-07 17:51:28.791191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:34:55.541 [2024-12-07 17:51:28.791200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.541 [2024-12-07 17:51:28.791256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.541 [2024-12-07 17:51:28.791266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:55.541 [2024-12-07 17:51:28.791275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:34:55.541 [2024-12-07 17:51:28.791282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.541 [2024-12-07 17:51:28.791602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.541 [2024-12-07 17:51:28.791615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:55.541 [2024-12-07 17:51:28.791624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:34:55.541 [2024-12-07 17:51:28.791632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.541 [2024-12-07 17:51:28.791701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.542 [2024-12-07 17:51:28.791711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:55.542 [2024-12-07 17:51:28.791720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:34:55.542 [2024-12-07 17:51:28.791728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.542 [2024-12-07 17:51:28.791750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.542 [2024-12-07 17:51:28.791759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:55.542 [2024-12-07 17:51:28.791771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:55.542 [2024-12-07 17:51:28.791779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.542 [2024-12-07 17:51:28.791797] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:55.542 [2024-12-07 17:51:28.796192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.542 [2024-12-07 17:51:28.796233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:55.542 [2024-12-07 17:51:28.796244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.399 ms 00:34:55.542 [2024-12-07 17:51:28.796252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.542 [2024-12-07 17:51:28.796299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.542 [2024-12-07 17:51:28.796309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:55.542 [2024-12-07 17:51:28.796317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:34:55.542 [2024-12-07 17:51:28.796325] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.542 [2024-12-07 17:51:28.796380] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:55.542 [2024-12-07 17:51:28.796404] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:55.542 [2024-12-07 17:51:28.796444] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:55.542 [2024-12-07 17:51:28.796461] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:55.542 [2024-12-07 17:51:28.796567] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:55.542 [2024-12-07 17:51:28.796578] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:55.542 [2024-12-07 17:51:28.796589] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:55.542 [2024-12-07 17:51:28.796601] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:55.542 [2024-12-07 17:51:28.796610] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:55.542 [2024-12-07 17:51:28.796622] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:55.542 [2024-12-07 17:51:28.796630] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:55.542 [2024-12-07 17:51:28.796637] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:55.542 [2024-12-07 17:51:28.796645] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:55.542 [2024-12-07 17:51:28.796654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.542 [2024-12-07 17:51:28.796661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:55.542 [2024-12-07 17:51:28.796669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:34:55.542 [2024-12-07 17:51:28.796676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.542 [2024-12-07 17:51:28.796759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.542 [2024-12-07 17:51:28.796768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:55.542 [2024-12-07 17:51:28.796776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:34:55.542 [2024-12-07 17:51:28.796786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.542 [2024-12-07 17:51:28.796888] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:55.542 [2024-12-07 17:51:28.796899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:55.542 [2024-12-07 17:51:28.796908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:55.542 [2024-12-07 17:51:28.796916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:55.542 [2024-12-07 17:51:28.796924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:55.542 [2024-12-07 17:51:28.796931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:55.542 [2024-12-07 17:51:28.796938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:55.542 
[2024-12-07 17:51:28.796947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:55.542 [2024-12-07 17:51:28.796954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:55.542 [2024-12-07 17:51:28.796961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:55.542 [2024-12-07 17:51:28.796968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:55.542 [2024-12-07 17:51:28.796977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:55.542 [2024-12-07 17:51:28.797010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:55.542 [2024-12-07 17:51:28.797018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:55.542 [2024-12-07 17:51:28.797025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:55.542 [2024-12-07 17:51:28.797039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:55.542 [2024-12-07 17:51:28.797047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:55.542 [2024-12-07 17:51:28.797054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:55.542 [2024-12-07 17:51:28.797061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:55.542 [2024-12-07 17:51:28.797068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:55.542 [2024-12-07 17:51:28.797075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:55.542 [2024-12-07 17:51:28.797082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:55.542 [2024-12-07 17:51:28.797089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:55.542 [2024-12-07 17:51:28.797096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:55.542 [2024-12-07 17:51:28.797103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:55.542 [2024-12-07 17:51:28.797109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:55.542 [2024-12-07 17:51:28.797116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:55.542 [2024-12-07 17:51:28.797124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:55.542 [2024-12-07 17:51:28.797131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:55.542 [2024-12-07 17:51:28.797137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:55.542 [2024-12-07 17:51:28.797144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:55.542 [2024-12-07 17:51:28.797151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:55.542 [2024-12-07 17:51:28.797158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:55.542 [2024-12-07 17:51:28.797164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:55.542 [2024-12-07 17:51:28.797171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:55.542 [2024-12-07 17:51:28.797177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:55.542 [2024-12-07 17:51:28.797183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:55.542 [2024-12-07 17:51:28.797189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:55.542 [2024-12-07 17:51:28.797196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:34:55.542 [2024-12-07 17:51:28.797202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:55.542 [2024-12-07 17:51:28.797209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:55.542 [2024-12-07 17:51:28.797215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:55.542 [2024-12-07 17:51:28.797222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:55.542 [2024-12-07 17:51:28.797231] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:55.542 [2024-12-07 17:51:28.797239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:55.542 [2024-12-07 17:51:28.797247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:55.542 [2024-12-07 17:51:28.797254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:55.542 [2024-12-07 17:51:28.797265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:55.542 [2024-12-07 17:51:28.797280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:55.542 [2024-12-07 17:51:28.797287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:55.542 [2024-12-07 17:51:28.797294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:55.542 [2024-12-07 17:51:28.797301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:55.542 [2024-12-07 17:51:28.797307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:55.542 [2024-12-07 17:51:28.797332] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:55.542 [2024-12-07 17:51:28.797343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:55.542 [2024-12-07 17:51:28.797351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:55.542 [2024-12-07 17:51:28.797360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:55.542 [2024-12-07 17:51:28.797367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:55.542 [2024-12-07 17:51:28.797375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:55.542 [2024-12-07 17:51:28.797383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:55.542 [2024-12-07 17:51:28.797390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:55.542 [2024-12-07 17:51:28.797397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:55.542 [2024-12-07 17:51:28.797405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:55.542 [2024-12-07 17:51:28.797412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:55.542 [2024-12-07 17:51:28.797419] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:55.543 [2024-12-07 17:51:28.797426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:55.543 [2024-12-07 17:51:28.797433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:55.543 [2024-12-07 17:51:28.797440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:55.543 [2024-12-07 17:51:28.797448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:55.543 [2024-12-07 17:51:28.797455] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:55.543 [2024-12-07 17:51:28.797464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:55.543 [2024-12-07 17:51:28.797473] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:55.543 [2024-12-07 17:51:28.797480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:55.543 [2024-12-07 17:51:28.797488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:55.543 [2024-12-07 17:51:28.797495] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:55.543 [2024-12-07 17:51:28.797503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.543 [2024-12-07 17:51:28.797511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:55.543 [2024-12-07 17:51:28.797519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:34:55.543 [2024-12-07 17:51:28.797527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.543 [2024-12-07 17:51:28.825635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.543 [2024-12-07 17:51:28.825683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:55.543 [2024-12-07 17:51:28.825696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.063 ms 00:34:55.543 [2024-12-07 17:51:28.825704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.543 [2024-12-07 17:51:28.825796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.543 [2024-12-07 17:51:28.825806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:55.543 [2024-12-07 17:51:28.825819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:34:55.543 [2024-12-07 17:51:28.825827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.543 [2024-12-07 17:51:28.877141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.543 [2024-12-07 17:51:28.877193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:55.543 [2024-12-07 17:51:28.877206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.255 ms 
00:34:55.543 [2024-12-07 17:51:28.877215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.543 [2024-12-07 17:51:28.877271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.543 [2024-12-07 17:51:28.877281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:55.543 [2024-12-07 17:51:28.877291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:55.543 [2024-12-07 17:51:28.877299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.543 [2024-12-07 17:51:28.877444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.543 [2024-12-07 17:51:28.877457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:55.543 [2024-12-07 17:51:28.877465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:34:55.543 [2024-12-07 17:51:28.877473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.543 [2024-12-07 17:51:28.877607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.543 [2024-12-07 17:51:28.877620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:55.543 [2024-12-07 17:51:28.877629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:34:55.543 [2024-12-07 17:51:28.877638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.543 [2024-12-07 17:51:28.893576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.543 [2024-12-07 17:51:28.893621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:55.543 [2024-12-07 17:51:28.893633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.917 ms 00:34:55.543 [2024-12-07 17:51:28.893641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.543 [2024-12-07 17:51:28.893801] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:34:55.543 [2024-12-07 17:51:28.893815] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:55.543 [2024-12-07 17:51:28.893829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.543 [2024-12-07 17:51:28.893837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:55.543 [2024-12-07 17:51:28.893846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:34:55.543 [2024-12-07 17:51:28.893853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.543 [2024-12-07 17:51:28.906170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.543 [2024-12-07 17:51:28.906217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:55.543 [2024-12-07 17:51:28.906229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.295 ms 00:34:55.543 [2024-12-07 17:51:28.906237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.543 [2024-12-07 17:51:28.906361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.543 [2024-12-07 17:51:28.906371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:55.543 [2024-12-07 17:51:28.906379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:34:55.543 [2024-12-07 17:51:28.906393] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.543 [2024-12-07 17:51:28.906448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.543 [2024-12-07 17:51:28.906458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:55.543 [2024-12-07 17:51:28.906467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:34:55.543 [2024-12-07 17:51:28.906482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.543 [2024-12-07 17:51:28.907142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.543 [2024-12-07 17:51:28.907175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:55.543 [2024-12-07 17:51:28.907190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:34:55.543 [2024-12-07 17:51:28.907202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.543 [2024-12-07 17:51:28.907244] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:34:55.543 [2024-12-07 17:51:28.907267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.543 [2024-12-07 17:51:28.907280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:55.543 [2024-12-07 17:51:28.907295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:34:55.543 [2024-12-07 17:51:28.907308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.805 [2024-12-07 17:51:28.919912] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:55.805 [2024-12-07 17:51:28.920100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.805 [2024-12-07 17:51:28.920113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:55.805 [2024-12-07 17:51:28.920125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.758 ms 00:34:55.805 [2024-12-07 17:51:28.920157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.805 [2024-12-07 17:51:28.922487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.805 [2024-12-07 17:51:28.922528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:55.805 [2024-12-07 17:51:28.922539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.296 ms 00:34:55.805 [2024-12-07 17:51:28.922548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.805 [2024-12-07 17:51:28.922632] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:34:55.805 [2024-12-07 17:51:28.923106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.805 [2024-12-07 17:51:28.923117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:55.805 [2024-12-07 17:51:28.923127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 00:34:55.805 [2024-12-07 17:51:28.923134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.805 [2024-12-07 17:51:28.923167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.805 [2024-12-07 17:51:28.923176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:55.805 [2024-12-07 17:51:28.923184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 
00:34:55.805 [2024-12-07 17:51:28.923191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.805 [2024-12-07 17:51:28.923227] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:55.805 [2024-12-07 17:51:28.923237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.805 [2024-12-07 17:51:28.923245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:55.805 [2024-12-07 17:51:28.923254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:55.805 [2024-12-07 17:51:28.923262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.805 [2024-12-07 17:51:28.950037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.805 [2024-12-07 17:51:28.950091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:55.805 [2024-12-07 17:51:28.950104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.754 ms 00:34:55.805 [2024-12-07 17:51:28.950114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.805 [2024-12-07 17:51:28.950202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.805 [2024-12-07 17:51:28.950211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:55.805 [2024-12-07 17:51:28.950221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:34:55.805 [2024-12-07 17:51:28.950229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.805 [2024-12-07 17:51:28.951481] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 161.209 ms, result 0
00:34:57.191  [2024-12-07T17:51:31.517Z] Copying: 11/1024 [MB] (11 MBps) [... 66 intermediate progress updates trimmed; per-interval throughput ranged 10-27 MBps ...] [2024-12-07T17:52:37.467Z] Copying: 1024/1024 [MB] (average 15 MBps)
[2024-12-07 17:52:37.254408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:04.085 [2024-12-07 17:52:37.254852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:04.085 [2024-12-07 17:52:37.254961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:36:04.085 [2024-12-07 17:52:37.255024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.085 [2024-12-07 17:52:37.255139] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:04.085 [2024-12-07 17:52:37.258760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:04.085 [2024-12-07 17:52:37.259015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:04.085 [2024-12-07 17:52:37.259242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.566 ms 00:36:04.085 [2024-12-07 17:52:37.259283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.085 [2024-12-07 17:52:37.259631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:04.085 [2024-12-07 17:52:37.259659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*:
[FTL][ftl0] name: Stop core poller 00:36:04.085 [2024-12-07 17:52:37.259673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:36:04.085 [2024-12-07 17:52:37.259685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.085 [2024-12-07 17:52:37.259726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:04.085 [2024-12-07 17:52:37.259739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:36:04.085 [2024-12-07 17:52:37.259752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:36:04.085 [2024-12-07 17:52:37.259764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.085 [2024-12-07 17:52:37.259839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:04.085 [2024-12-07 17:52:37.259856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:36:04.085 [2024-12-07 17:52:37.259868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:36:04.085 [2024-12-07 17:52:37.259879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.085 [2024-12-07 17:52:37.259899] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:04.085 [2024-12-07 17:52:37.259917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:36:04.085 [2024-12-07 17:52:37.259949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:36:04.085 [2024-12-07 17:52:37.259961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:04.085 [2024-12-07 17:52:37.259973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:04.085 [2024-12-07 17:52:37.260004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:04.085 [2024-12-07 17:52:37.260016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:04.085 [2024-12-07 17:52:37.260029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:04.085 [2024-12-07 17:52:37.260041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 
wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260735] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.260999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.261011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.261024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.261035] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.261047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.261058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.261070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:04.086 [2024-12-07 17:52:37.261082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:04.087 [2024-12-07 17:52:37.261094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:04.087 [2024-12-07 17:52:37.261105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:04.087 [2024-12-07 17:52:37.261127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:04.087 [2024-12-07 17:52:37.262421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:04.087 [2024-12-07 17:52:37.262436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:04.087 [2024-12-07 17:52:37.262462] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:04.087 [2024-12-07 17:52:37.262475] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 58d6a6c1-aac4-4bca-8413-1635a6a457e5 00:36:04.087 [2024-12-07 17:52:37.262488] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:36:04.087 [2024-12-07 17:52:37.262499] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3616 00:36:04.087 [2024-12-07 17:52:37.262510] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3584 00:36:04.087 [2024-12-07 17:52:37.262528] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:36:04.087 [2024-12-07 17:52:37.262538] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:04.087 [2024-12-07 17:52:37.262551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:04.087 [2024-12-07 17:52:37.262563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:04.087 [2024-12-07 17:52:37.262573] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:04.087 [2024-12-07 17:52:37.262582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:04.087 [2024-12-07 17:52:37.262594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:04.087 [2024-12-07 17:52:37.262605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:04.087 [2024-12-07 17:52:37.262618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.695 ms 00:36:04.087 [2024-12-07 17:52:37.262629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.087 [2024-12-07 17:52:37.279415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:04.087 [2024-12-07 17:52:37.279579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:04.087 [2024-12-07 17:52:37.279644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.759 ms 00:36:04.087 [2024-12-07 17:52:37.279667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
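The statistics dump above reports total writes 3616 against user writes 3584, and the WAF line is simply their ratio; note also that the 131072 valid LBAs equal Band 1's fill count, the only non-free band in the dump. A one-line sanity check of the printed WAF, with both counters hard-coded from this run:

    # WAF (write amplification factor) = total media writes / user writes.
    awk 'BEGIN { printf "WAF: %.4f\n", 3616 / 3584 }'   # prints: WAF: 1.0089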
00:36:04.087 [2024-12-07 17:52:37.280093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:04.087 [2024-12-07 17:52:37.280138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:04.087 [2024-12-07 17:52:37.280270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:36:04.087 [2024-12-07 17:52:37.280293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.087 [2024-12-07 17:52:37.316942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:04.087 [2024-12-07 17:52:37.317132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:04.087 [2024-12-07 17:52:37.317195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:04.087 [2024-12-07 17:52:37.317220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.087 [2024-12-07 17:52:37.317336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:04.087 [2024-12-07 17:52:37.317363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:04.087 [2024-12-07 17:52:37.317388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:04.087 [2024-12-07 17:52:37.317540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.087 [2024-12-07 17:52:37.317626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:04.087 [2024-12-07 17:52:37.317688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:04.087 [2024-12-07 17:52:37.317711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:04.087 [2024-12-07 17:52:37.317732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.087 [2024-12-07 17:52:37.317762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:04.087 [2024-12-07 17:52:37.317831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:04.087 [2024-12-07 17:52:37.317855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:04.087 [2024-12-07 17:52:37.317874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.087 [2024-12-07 17:52:37.402197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:04.087 [2024-12-07 17:52:37.402411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:04.087 [2024-12-07 17:52:37.402471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:04.087 [2024-12-07 17:52:37.402495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.349 [2024-12-07 17:52:37.472085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:04.349 [2024-12-07 17:52:37.472272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:04.349 [2024-12-07 17:52:37.472334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:04.349 [2024-12-07 17:52:37.472358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.349 [2024-12-07 17:52:37.472461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:04.349 [2024-12-07 17:52:37.472486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:04.349 [2024-12-07 17:52:37.472516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:04.349 [2024-12-07 
17:52:37.472536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.349 [2024-12-07 17:52:37.472588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:04.349 [2024-12-07 17:52:37.472690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:04.349 [2024-12-07 17:52:37.472711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:04.349 [2024-12-07 17:52:37.472731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.349 [2024-12-07 17:52:37.472832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:04.349 [2024-12-07 17:52:37.472857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:04.349 [2024-12-07 17:52:37.472917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:04.349 [2024-12-07 17:52:37.472941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.349 [2024-12-07 17:52:37.473014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:04.349 [2024-12-07 17:52:37.473219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:04.349 [2024-12-07 17:52:37.473268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:04.349 [2024-12-07 17:52:37.473288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.349 [2024-12-07 17:52:37.473486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:04.349 [2024-12-07 17:52:37.473512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:04.349 [2024-12-07 17:52:37.473534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:04.349 [2024-12-07 17:52:37.473633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.349 [2024-12-07 17:52:37.473694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:04.349 [2024-12-07 17:52:37.473750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:04.349 [2024-12-07 17:52:37.473775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:04.349 [2024-12-07 17:52:37.473795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:04.349 [2024-12-07 17:52:37.473951] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 219.504 ms, result 0 00:36:04.921 00:36:04.921 00:36:04.921 17:52:38 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:07.485 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:36:07.486 17:52:40 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:36:07.486 17:52:40 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:36:07.486 17:52:40 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:36:07.486 17:52:40 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:07.486 17:52:40 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:36:07.486 Process with pid 83991 is not found 00:36:07.486 Remove shared memory files 00:36:07.486 17:52:40 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 83991 00:36:07.486 17:52:40 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # 
'[' -z 83991 ']' 00:36:07.486 17:52:40 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 83991 00:36:07.486 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (83991) - No such process 00:36:07.486 17:52:40 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 83991 is not found' 00:36:07.486 17:52:40 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:36:07.486 17:52:40 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:07.486 17:52:40 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:36:07.486 17:52:40 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_58d6a6c1-aac4-4bca-8413-1635a6a457e5_band_md /dev/hugepages/ftl_58d6a6c1-aac4-4bca-8413-1635a6a457e5_l2p_l1 /dev/hugepages/ftl_58d6a6c1-aac4-4bca-8413-1635a6a457e5_l2p_l2 /dev/hugepages/ftl_58d6a6c1-aac4-4bca-8413-1635a6a457e5_l2p_l2_ctx /dev/hugepages/ftl_58d6a6c1-aac4-4bca-8413-1635a6a457e5_nvc_md /dev/hugepages/ftl_58d6a6c1-aac4-4bca-8413-1635a6a457e5_p2l_pool /dev/hugepages/ftl_58d6a6c1-aac4-4bca-8413-1635a6a457e5_sb /dev/hugepages/ftl_58d6a6c1-aac4-4bca-8413-1635a6a457e5_sb_shm /dev/hugepages/ftl_58d6a6c1-aac4-4bca-8413-1635a6a457e5_trim_bitmap /dev/hugepages/ftl_58d6a6c1-aac4-4bca-8413-1635a6a457e5_trim_log /dev/hugepages/ftl_58d6a6c1-aac4-4bca-8413-1635a6a457e5_trim_md /dev/hugepages/ftl_58d6a6c1-aac4-4bca-8413-1635a6a457e5_vmap 00:36:07.486 17:52:40 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:36:07.486 17:52:40 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:07.486 17:52:40 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:36:07.486 ************************************ 00:36:07.486 END TEST ftl_restore_fast 00:36:07.486 ************************************ 00:36:07.486 00:36:07.486 real 5m5.647s 00:36:07.486 user 4m53.972s 00:36:07.486 sys 0m11.223s 00:36:07.486 17:52:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:07.486 17:52:40 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:36:07.486 17:52:40 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:36:07.486 17:52:40 ftl -- ftl/ftl.sh@14 -- # killprocess 74977 00:36:07.486 17:52:40 ftl -- common/autotest_common.sh@954 -- # '[' -z 74977 ']' 00:36:07.486 Process with pid 74977 is not found 00:36:07.486 17:52:40 ftl -- common/autotest_common.sh@958 -- # kill -0 74977 00:36:07.486 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74977) - No such process 00:36:07.486 17:52:40 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 74977 is not found' 00:36:07.486 17:52:40 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:36:07.486 17:52:40 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=87074 00:36:07.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:07.486 17:52:40 ftl -- ftl/ftl.sh@20 -- # waitforlisten 87074 00:36:07.486 17:52:40 ftl -- common/autotest_common.sh@835 -- # '[' -z 87074 ']' 00:36:07.486 17:52:40 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:07.486 17:52:40 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:07.486 17:52:40 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
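The "No such process" exchange above is the harness probing pid 83991 with kill -0 before attempting to kill it; the process had already exited, so killprocess only echoes the not-found message. A simplified sketch of that liveness pattern (the real killprocess() in autotest_common.sh additionally inspects the process name, as the pid-87074 teardown below shows):

    # kill -0 sends no signal; it only tests whether the pid exists and is signalable.
    pid=83991
    if kill -0 "$pid" 2>/dev/null; then
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"   # wait is valid here because the harness spawned the pid
    else
        echo "Process with pid $pid is not found"
    fi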
00:36:07.486 17:52:40 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:07.486 17:52:40 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:07.486 17:52:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:07.486 [2024-12-07 17:52:40.686194] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 24.03.0 initialization... 00:36:07.486 [2024-12-07 17:52:40.686344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87074 ] 00:36:07.486 [2024-12-07 17:52:40.848617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:07.791 [2024-12-07 17:52:40.972398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:08.369 17:52:41 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:08.369 17:52:41 ftl -- common/autotest_common.sh@868 -- # return 0 00:36:08.369 17:52:41 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:36:08.629 nvme0n1 00:36:08.629 17:52:41 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:36:08.629 17:52:41 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:36:08.629 17:52:41 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:36:08.888 17:52:42 ftl -- ftl/common.sh@28 -- # stores=53a36342-1330-4801-90e9-67b5437aa65e 00:36:08.888 17:52:42 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:36:08.888 17:52:42 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 53a36342-1330-4801-90e9-67b5437aa65e 00:36:09.147 17:52:42 ftl -- ftl/ftl.sh@23 -- # killprocess 87074 00:36:09.147 17:52:42 ftl -- common/autotest_common.sh@954 -- # '[' -z 87074 ']' 00:36:09.147 17:52:42 ftl -- common/autotest_common.sh@958 -- # kill -0 87074 00:36:09.147 17:52:42 ftl -- common/autotest_common.sh@959 -- # uname 00:36:09.147 17:52:42 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:09.147 17:52:42 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87074 00:36:09.147 killing process with pid 87074 00:36:09.147 17:52:42 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:09.147 17:52:42 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:09.147 17:52:42 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87074' 00:36:09.147 17:52:42 ftl -- common/autotest_common.sh@973 -- # kill 87074 00:36:09.147 17:52:42 ftl -- common/autotest_common.sh@978 -- # wait 87074 00:36:10.531 17:52:43 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:10.790 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:10.790 Waiting for block devices as requested 00:36:10.790 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:10.790 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:11.051 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:36:11.051 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:36:16.333 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:36:16.333 17:52:49 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:36:16.333 Remove shared memory files 00:36:16.333 17:52:49 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 
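Before handing the device back, ftl.sh attaches the controller over RPC and clears any leftover logical-volume stores; that is what the bdev_lvol_get_lvstores / bdev_lvol_delete_lvstore pair above does. The same cleanup loop sketched standalone (assumes a running SPDK target plus rpc.py and jq on PATH; repository paths abbreviated):

    # Delete every lvstore the running target reports.
    stores=$(scripts/rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    for lvs in $stores; do
        scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
    done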
00:36:16.333 17:52:49 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:36:16.333 17:52:49 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:36:16.333 17:52:49 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:36:16.333 17:52:49 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:16.333 17:52:49 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:36:16.333 ************************************ 00:36:16.333 END TEST ftl 00:36:16.333 ************************************ 00:36:16.333 00:36:16.333 real 18m21.660s 00:36:16.333 user 20m21.898s 00:36:16.333 sys 1m30.167s 00:36:16.333 17:52:49 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:16.333 17:52:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:16.333 17:52:49 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:16.333 17:52:49 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:16.333 17:52:49 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:16.333 17:52:49 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:16.333 17:52:49 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:16.333 17:52:49 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:16.333 17:52:49 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:16.333 17:52:49 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:16.333 17:52:49 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:16.333 17:52:49 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:16.333 17:52:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:16.333 17:52:49 -- common/autotest_common.sh@10 -- # set +x 00:36:16.333 17:52:49 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:16.333 17:52:49 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:16.333 17:52:49 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:16.333 17:52:49 -- common/autotest_common.sh@10 -- # set +x 00:36:17.717 INFO: APP EXITING 00:36:17.717 INFO: killing all VMs 00:36:17.717 INFO: killing vhost app 00:36:17.717 INFO: EXIT DONE 00:36:17.979 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:18.239 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:36:18.239 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:36:18.239 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:36:18.239 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:36:18.811 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:19.073 Cleaning 00:36:19.073 Removing: /var/run/dpdk/spdk0/config 00:36:19.073 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:19.073 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:19.073 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:19.073 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:19.073 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:19.073 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:19.073 Removing: /var/run/dpdk/spdk0 00:36:19.073 Removing: /var/run/dpdk/spdk_pid56940 00:36:19.073 Removing: /var/run/dpdk/spdk_pid57137 00:36:19.073 Removing: /var/run/dpdk/spdk_pid57344 00:36:19.073 Removing: /var/run/dpdk/spdk_pid57441 00:36:19.073 Removing: /var/run/dpdk/spdk_pid57476 00:36:19.073 Removing: /var/run/dpdk/spdk_pid57599 00:36:19.073 Removing: /var/run/dpdk/spdk_pid57617 00:36:19.073 Removing: /var/run/dpdk/spdk_pid57810 00:36:19.073 Removing: /var/run/dpdk/spdk_pid57909 00:36:19.073 Removing: /var/run/dpdk/spdk_pid58005 00:36:19.073 Removing: /var/run/dpdk/spdk_pid58109 
00:36:19.073 Removing: /var/run/dpdk/spdk_pid58202 00:36:19.073 Removing: /var/run/dpdk/spdk_pid58241 00:36:19.073 Removing: /var/run/dpdk/spdk_pid58278 00:36:19.073 Removing: /var/run/dpdk/spdk_pid58348 00:36:19.073 Removing: /var/run/dpdk/spdk_pid58427 00:36:19.073 Removing: /var/run/dpdk/spdk_pid58863 00:36:19.073 Removing: /var/run/dpdk/spdk_pid58916 00:36:19.073 Removing: /var/run/dpdk/spdk_pid58968 00:36:19.073 Removing: /var/run/dpdk/spdk_pid58984 00:36:19.073 Removing: /var/run/dpdk/spdk_pid59075 00:36:19.073 Removing: /var/run/dpdk/spdk_pid59091 00:36:19.073 Removing: /var/run/dpdk/spdk_pid59182 00:36:19.073 Removing: /var/run/dpdk/spdk_pid59198 00:36:19.073 Removing: /var/run/dpdk/spdk_pid59251 00:36:19.073 Removing: /var/run/dpdk/spdk_pid59269 00:36:19.073 Removing: /var/run/dpdk/spdk_pid59322 00:36:19.073 Removing: /var/run/dpdk/spdk_pid59340 00:36:19.073 Removing: /var/run/dpdk/spdk_pid59495 00:36:19.073 Removing: /var/run/dpdk/spdk_pid59531 00:36:19.073 Removing: /var/run/dpdk/spdk_pid59615 00:36:19.073 Removing: /var/run/dpdk/spdk_pid59787 00:36:19.073 Removing: /var/run/dpdk/spdk_pid59865 00:36:19.073 Removing: /var/run/dpdk/spdk_pid59902 00:36:19.073 Removing: /var/run/dpdk/spdk_pid60334 00:36:19.073 Removing: /var/run/dpdk/spdk_pid60432 00:36:19.073 Removing: /var/run/dpdk/spdk_pid60541 00:36:19.073 Removing: /var/run/dpdk/spdk_pid60596 00:36:19.073 Removing: /var/run/dpdk/spdk_pid60627 00:36:19.073 Removing: /var/run/dpdk/spdk_pid60700 00:36:19.073 Removing: /var/run/dpdk/spdk_pid61330 00:36:19.073 Removing: /var/run/dpdk/spdk_pid61367 00:36:19.073 Removing: /var/run/dpdk/spdk_pid61838 00:36:19.073 Removing: /var/run/dpdk/spdk_pid61936 00:36:19.073 Removing: /var/run/dpdk/spdk_pid62056 00:36:19.073 Removing: /var/run/dpdk/spdk_pid62109 00:36:19.073 Removing: /var/run/dpdk/spdk_pid62135 00:36:19.073 Removing: /var/run/dpdk/spdk_pid62160 00:36:19.073 Removing: /var/run/dpdk/spdk_pid64006 00:36:19.073 Removing: /var/run/dpdk/spdk_pid64132 00:36:19.073 Removing: /var/run/dpdk/spdk_pid64136 00:36:19.073 Removing: /var/run/dpdk/spdk_pid64154 00:36:19.073 Removing: /var/run/dpdk/spdk_pid64200 00:36:19.073 Removing: /var/run/dpdk/spdk_pid64204 00:36:19.073 Removing: /var/run/dpdk/spdk_pid64216 00:36:19.073 Removing: /var/run/dpdk/spdk_pid64261 00:36:19.073 Removing: /var/run/dpdk/spdk_pid64265 00:36:19.335 Removing: /var/run/dpdk/spdk_pid64277 00:36:19.335 Removing: /var/run/dpdk/spdk_pid64322 00:36:19.335 Removing: /var/run/dpdk/spdk_pid64326 00:36:19.335 Removing: /var/run/dpdk/spdk_pid64338 00:36:19.335 Removing: /var/run/dpdk/spdk_pid65726 00:36:19.335 Removing: /var/run/dpdk/spdk_pid65829 00:36:19.335 Removing: /var/run/dpdk/spdk_pid67232 00:36:19.335 Removing: /var/run/dpdk/spdk_pid68984 00:36:19.335 Removing: /var/run/dpdk/spdk_pid69053 00:36:19.335 Removing: /var/run/dpdk/spdk_pid69137 00:36:19.335 Removing: /var/run/dpdk/spdk_pid69241 00:36:19.335 Removing: /var/run/dpdk/spdk_pid69334 00:36:19.335 Removing: /var/run/dpdk/spdk_pid69430 00:36:19.335 Removing: /var/run/dpdk/spdk_pid69498 00:36:19.335 Removing: /var/run/dpdk/spdk_pid69579 00:36:19.335 Removing: /var/run/dpdk/spdk_pid69683 00:36:19.335 Removing: /var/run/dpdk/spdk_pid69775 00:36:19.335 Removing: /var/run/dpdk/spdk_pid69865 00:36:19.335 Removing: /var/run/dpdk/spdk_pid69939 00:36:19.335 Removing: /var/run/dpdk/spdk_pid70014 00:36:19.335 Removing: /var/run/dpdk/spdk_pid70123 00:36:19.335 Removing: /var/run/dpdk/spdk_pid70211 00:36:19.335 Removing: /var/run/dpdk/spdk_pid70307 00:36:19.335 Removing: 
/var/run/dpdk/spdk_pid70381 00:36:19.335 Removing: /var/run/dpdk/spdk_pid70456 00:36:19.335 Removing: /var/run/dpdk/spdk_pid70556 00:36:19.335 Removing: /var/run/dpdk/spdk_pid70653 00:36:19.335 Removing: /var/run/dpdk/spdk_pid70749 00:36:19.335 Removing: /var/run/dpdk/spdk_pid70812 00:36:19.335 Removing: /var/run/dpdk/spdk_pid70892 00:36:19.335 Removing: /var/run/dpdk/spdk_pid70966 00:36:19.335 Removing: /var/run/dpdk/spdk_pid71040 00:36:19.335 Removing: /var/run/dpdk/spdk_pid71142 00:36:19.335 Removing: /var/run/dpdk/spdk_pid71234 00:36:19.335 Removing: /var/run/dpdk/spdk_pid71329 00:36:19.335 Removing: /var/run/dpdk/spdk_pid71403 00:36:19.335 Removing: /var/run/dpdk/spdk_pid71476 00:36:19.335 Removing: /var/run/dpdk/spdk_pid71547 00:36:19.335 Removing: /var/run/dpdk/spdk_pid71621 00:36:19.335 Removing: /var/run/dpdk/spdk_pid71730 00:36:19.335 Removing: /var/run/dpdk/spdk_pid71815 00:36:19.335 Removing: /var/run/dpdk/spdk_pid71959 00:36:19.335 Removing: /var/run/dpdk/spdk_pid72243 00:36:19.335 Removing: /var/run/dpdk/spdk_pid72274 00:36:19.335 Removing: /var/run/dpdk/spdk_pid72731 00:36:19.335 Removing: /var/run/dpdk/spdk_pid72919 00:36:19.335 Removing: /var/run/dpdk/spdk_pid73012 00:36:19.335 Removing: /var/run/dpdk/spdk_pid73127 00:36:19.335 Removing: /var/run/dpdk/spdk_pid73175 00:36:19.335 Removing: /var/run/dpdk/spdk_pid73195 00:36:19.335 Removing: /var/run/dpdk/spdk_pid73494 00:36:19.335 Removing: /var/run/dpdk/spdk_pid73560 00:36:19.335 Removing: /var/run/dpdk/spdk_pid73639 00:36:19.335 Removing: /var/run/dpdk/spdk_pid74027 00:36:19.335 Removing: /var/run/dpdk/spdk_pid74172 00:36:19.335 Removing: /var/run/dpdk/spdk_pid74977 00:36:19.335 Removing: /var/run/dpdk/spdk_pid75109 00:36:19.335 Removing: /var/run/dpdk/spdk_pid75276 00:36:19.335 Removing: /var/run/dpdk/spdk_pid75373 00:36:19.335 Removing: /var/run/dpdk/spdk_pid75688 00:36:19.335 Removing: /var/run/dpdk/spdk_pid75958 00:36:19.335 Removing: /var/run/dpdk/spdk_pid76311 00:36:19.335 Removing: /var/run/dpdk/spdk_pid76494 00:36:19.335 Removing: /var/run/dpdk/spdk_pid76696 00:36:19.335 Removing: /var/run/dpdk/spdk_pid76749 00:36:19.335 Removing: /var/run/dpdk/spdk_pid76936 00:36:19.335 Removing: /var/run/dpdk/spdk_pid76961 00:36:19.335 Removing: /var/run/dpdk/spdk_pid77014 00:36:19.335 Removing: /var/run/dpdk/spdk_pid77291 00:36:19.335 Removing: /var/run/dpdk/spdk_pid77519 00:36:19.335 Removing: /var/run/dpdk/spdk_pid78112 00:36:19.335 Removing: /var/run/dpdk/spdk_pid78752 00:36:19.335 Removing: /var/run/dpdk/spdk_pid79336 00:36:19.335 Removing: /var/run/dpdk/spdk_pid80128 00:36:19.335 Removing: /var/run/dpdk/spdk_pid80281 00:36:19.335 Removing: /var/run/dpdk/spdk_pid80358 00:36:19.335 Removing: /var/run/dpdk/spdk_pid80838 00:36:19.335 Removing: /var/run/dpdk/spdk_pid80894 00:36:19.335 Removing: /var/run/dpdk/spdk_pid81607 00:36:19.335 Removing: /var/run/dpdk/spdk_pid82155 00:36:19.335 Removing: /var/run/dpdk/spdk_pid82949 00:36:19.335 Removing: /var/run/dpdk/spdk_pid83070 00:36:19.335 Removing: /var/run/dpdk/spdk_pid83113 00:36:19.335 Removing: /var/run/dpdk/spdk_pid83166 00:36:19.335 Removing: /var/run/dpdk/spdk_pid83243 00:36:19.335 Removing: /var/run/dpdk/spdk_pid83296 00:36:19.335 Removing: /var/run/dpdk/spdk_pid83493 00:36:19.335 Removing: /var/run/dpdk/spdk_pid83586 00:36:19.335 Removing: /var/run/dpdk/spdk_pid83647 00:36:19.335 Removing: /var/run/dpdk/spdk_pid83741 00:36:19.335 Removing: /var/run/dpdk/spdk_pid83779 00:36:19.335 Removing: /var/run/dpdk/spdk_pid83846 00:36:19.335 Removing: /var/run/dpdk/spdk_pid83991 
00:36:19.335 Removing: /var/run/dpdk/spdk_pid84221 00:36:19.335 Removing: /var/run/dpdk/spdk_pid84860 00:36:19.335 Removing: /var/run/dpdk/spdk_pid85678 00:36:19.335 Removing: /var/run/dpdk/spdk_pid86335 00:36:19.335 Removing: /var/run/dpdk/spdk_pid87074 00:36:19.335 Clean 00:36:19.597 17:52:52 -- common/autotest_common.sh@1453 -- # return 0 00:36:19.597 17:52:52 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:19.597 17:52:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:19.597 17:52:52 -- common/autotest_common.sh@10 -- # set +x 00:36:19.597 17:52:52 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:36:19.597 17:52:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:19.597 17:52:52 -- common/autotest_common.sh@10 -- # set +x 00:36:19.597 17:52:52 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:19.597 17:52:52 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:36:19.597 17:52:52 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:36:19.597 17:52:52 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:19.597 17:52:52 -- spdk/autotest.sh@398 -- # hostname 00:36:19.597 17:52:52 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:36:19.859 geninfo: WARNING: invalid characters removed from testname! 00:36:46.444 17:53:17 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:47.381 17:53:20 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:49.930 17:53:22 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:51.843 17:53:24 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:53.213 17:53:26 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:55.111 17:53:28 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:57.011 17:53:30 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:57.011 17:53:30 -- spdk/autorun.sh@1 -- $ timing_finish 00:36:57.011 17:53:30 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:36:57.011 17:53:30 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:57.011 17:53:30 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:57.011 17:53:30 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:57.011 + [[ -n 5034 ]] 00:36:57.011 + sudo kill 5034 00:36:57.021 [Pipeline] } 00:36:57.035 [Pipeline] // timeout 00:36:57.039 [Pipeline] } 00:36:57.051 [Pipeline] // stage 00:36:57.055 [Pipeline] } 00:36:57.067 [Pipeline] // catchError 00:36:57.075 [Pipeline] stage 00:36:57.077 [Pipeline] { (Stop VM) 00:36:57.087 [Pipeline] sh 00:36:57.369 + vagrant halt 00:36:59.909 ==> default: Halting domain... 00:37:05.269 [Pipeline] sh 00:37:05.552 + vagrant destroy -f 00:37:08.095 ==> default: Removing domain... 00:37:08.685 [Pipeline] sh 00:37:08.977 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:37:08.988 [Pipeline] } 00:37:09.007 [Pipeline] // stage 00:37:09.014 [Pipeline] } 00:37:09.031 [Pipeline] // dir 00:37:09.038 [Pipeline] } 00:37:09.056 [Pipeline] // wrap 00:37:09.064 [Pipeline] } 00:37:09.078 [Pipeline] // catchError 00:37:09.090 [Pipeline] stage 00:37:09.092 [Pipeline] { (Epilogue) 00:37:09.108 [Pipeline] sh 00:37:09.396 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:14.687 [Pipeline] catchError 00:37:14.689 [Pipeline] { 00:37:14.702 [Pipeline] sh 00:37:14.986 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:14.986 Artifacts sizes are good 00:37:14.996 [Pipeline] } 00:37:15.012 [Pipeline] // catchError 00:37:15.024 [Pipeline] archiveArtifacts 00:37:15.032 Archiving artifacts 00:37:15.130 [Pipeline] cleanWs 00:37:15.140 [WS-CLEANUP] Deleting project workspace... 00:37:15.140 [WS-CLEANUP] Deferred wipeout is used... 00:37:15.147 [WS-CLEANUP] done 00:37:15.149 [Pipeline] } 00:37:15.169 [Pipeline] // stage 00:37:15.174 [Pipeline] } 00:37:15.189 [Pipeline] // node 00:37:15.196 [Pipeline] End of Pipeline 00:37:15.232 Finished: SUCCESS